What is the difference between the HTTPS protocol and the SSL certificate that we use in a web browser?
Aren't both of these used to encrypt communication between client (browser) and server?
HTTPS is HTTP (HyperText Transfer Protocol) plus SSL (Secure Sockets Layer). You need a certificate to use any protocol that uses SSL.
SSL allows arbitrary protocols to be communicated securely. It enables clients to (a) verify that they are indeed communicating with the server they expect and not a man-in-the-middle and (b) encrypt the network traffic so that parties other than the client and server cannot see the communication.
An SSL certificate contains a public key and the identity of the certificate issuer. Not only can clients use the certificate to communicate securely with a server, they can also verify that the certificate was cryptographically signed by an official Certificate Authority. For example, if your browser trusts the VeriSign Certificate Authority, and VeriSign signs my SSL certificate, your browser will inherently trust my SSL certificate.
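For a concrete sense of what a certificate carries, here is a minimal PHP sketch (the hostname is only a placeholder) that opens a TLS connection, captures the server's certificate, and prints who it was issued to and which CA signed it. The connection only succeeds in the first place because PHP verifies the certificate against the system's trusted CA list:
<?php
// Minimal sketch: open a TLS connection and inspect the server's certificate.
// "www.example.com" is only a placeholder host.
$context = stream_context_create(['ssl' => ['capture_peer_cert' => true]]);
$client  = stream_socket_client('ssl://www.example.com:443', $errno, $errstr,
                                30, STREAM_CLIENT_CONNECT, $context);
if ($client === false) {
    exit("Connection failed: $errstr\n");   // e.g. an untrusted or invalid certificate
}
$params = stream_context_get_params($client);
$cert   = openssl_x509_parse($params['options']['ssl']['peer_certificate']);
echo $cert['subject']['CN'], "\n";  // who the certificate was issued to
echo $cert['issuer']['CN'], "\n";   // which Certificate Authority signed it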
There's some good reading here: http://en.wikipedia.org/wiki/Transport_Layer_Security
Two pieces of one solution.
https is the protocol that defines how the client and server are going to negotiate a secure connection.
The SSL certificate is the document that they will use to agree upon the server's authenticity.
HSTS is the new HTTPS.
HTTPS is highly vulnerable to SSL Stripping / MITM (man in the middle).
To quote Adam Langley's (Google) blog, Imperial Violet:
"HTTPS tends to cause people to give talks mocking certificate security and the ecosystem around it. "
The problem is that the page isn't served over HTTPS. It should have been, but when a user types a hostname into a browser, the default scheme is HTTP. The server may attempt to redirect users to HTTPS, but that redirect is insecure: a MITM attacker can rewrite it and keep the user on HTTP, spoofing the real site the whole time. The attacker can now intercept all the traffic to this perfectly well configured and secure website.
This is called SSL stripping and it's terribly simple and devastatingly effective. We probably don't see it very often because it's not something that corporate proxies need to do, so it's not in off-the-shelf devices. But that respite is unlikely to last very long and maybe it's already over: how would we even know if it was being used?
In order to stop SSL stripping, we need to make HTTPS the only protocol. We can't do that for the whole Internet, but we can do it site-by-site with HTTP Strict Transport Security (HSTS).
HSTS tells browsers to always make requests over HTTPS to HSTS sites. Sites become HSTS either by being built into the browser, or by advertising a header:
Strict-Transport-Security: max-age=8640000; includeSubDomains
The header is in force for the given number of seconds and may also apply to all subdomains. The header must be received over a clean HTTPS connection.
Once the browser knows that a site is HTTPS only, the user typing mail.google.com is safe: the initial request uses HTTPS and there's no hole for an attacker to exploit.
(mail.google.com and a number of other sites are already built into Chrome as HSTS sites so it's not actually possible to access accounts.google.com over HTTP with Chrome - I had to doctor that image! If you want to be included in Chrome's built-in HSTS list, email me.)
HSTS can also protect you, the webmaster, from making silly mistakes. Let's assume that you've told your mother that she should always type https:// before going to her banking site or maybe you set up a bookmark for her. That's honestly more than we can, or should, expect of our users. But let's say that our supererogatory user...
Because of the obstructive link rules for new users on Stack Overflow I cannot give you the rest of Adam's post; you'll have to visit Adam Langley's blog yourself at
https://www.imperialviolet.org/2012/07/19/hope9talk.html
"Adam Langley works on both Google's HTTPS serving infrastructure and Google Chrome's network stack."
HTTPS is simply HTTP spoken over an SSL/TLS connection, so it sits at the application layer, while SSL/TLS is the lower-level, transport-layer protocol that actually encrypts the traffic.
Because TLS secures the connection rather than individual messages, HTTPS by itself does not provide non-repudiation of individual requests or responses; that would require message-level digital signatures on top.
Any flexibility in the level of security (cipher suites, certificate requirements, and so on) comes from how the underlying TLS layer is configured, not from HTTP itself.
Related
I was trying to familiarise myself with the basic concepts of HTTPS when I came across how its encryption works: in a nutshell, the client and the server agree on keys during the handshake and everything after that travels over an encrypted channel.
Now I have seen QA engineers in my company use a tool called Burp Suite to intercept requests.
What I am confused about is: even though the data flows through an encrypted channel, how can an interception tool like Burp Suite manage to intercept the request?
Just to try it out, I tried to intercept a Facebook request in Burp Suite.
Here you can clearly see the test email test#gmail.com I used in the intercepted request.
Why is this data not encrypted according to https standards?
Or, if it is, how does Burp Suite manage to decrypt it?
Thank you.
Meta: this isn't really a development or programming question or problem, although Burp is sometimes used for research or debugging.
If you LOOK AT THE DOCUMENTATION on Using Burp Proxy
Burp CA certificate - Since Burp breaks TLS connections between your browser and servers, your browser will by default show a warning message if you visit an HTTPS site via Burp Proxy. This is because the browser does not recognize Burp's TLS certificate, and infers that your traffic may be being intercepted by a third-party attacker. To use Burp effectively with TLS connections, you really need to install Burp's Certificate Authority master certificate in your browser, so that it trusts the certificates generated by Burp.
and following the link provided right there
By default, when you browse an HTTPS website via Burp, the Proxy generates a TLS certificate for each host, signed by its own Certificate Authority (CA) certificate. ...
Using its own generated cert (and matching key, although the webpage doesn't talk about that because it isn't visible to people) instead of the cert from the real site allows Burp to 'terminate' the TLS session from the client, decrypting and examining the data, and then forwarding that data over a different TLS session to the real site, and vice versa on the response (unless configured to do something different like modify the data).
... This CA certificate is generated the first time Burp is run, and stored locally. To use Burp Proxy most effectively with HTTPS websites, you will need to install Burp's CA certificate as a trusted root in your browser.
This is followed by a warning about the risks, and a link to instructions to do so.
Having its own CA cert trusted in the browser means that the generated cert is accepted by the browser and everything looks mostly normal to the browser user (or other client).
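To see the same mechanics from the client side, here is a hedged PHP/cURL sketch (the proxy address and URL are placeholders): routed through an intercepting proxy such as Burp on 127.0.0.1:8080, the request fails certificate verification unless the proxy's CA certificate has been added to the trust store:
<?php
// Sketch: send a request through an intercepting proxy (e.g. Burp on 127.0.0.1:8080).
// With default verification the TLS handshake fails, because the certificate the
// proxy generates is not signed by any CA the client already trusts.
$ch = curl_init('https://www.example.com/');          // placeholder URL
curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:8080');    // the intercepting proxy
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// To make interception "work", you would point cURL at the proxy's CA cert, e.g.:
// curl_setopt($ch, CURLOPT_CAINFO, '/path/to/burp-ca.pem');  // hypothetical path
if (curl_exec($ch) === false) {
    echo 'TLS error: ', curl_error($ch), "\n";        // expect a verification failure
}
curl_close($ch);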
I created an SSL virtual host in Apache with a self-signed certificate.
In my opinion the configuration is correct; however, it is still possible to access this URL using "curl --insecure".
After searching Google, reading several tutorials, and trying several configurations (the directives SSLVerifyClient | SSLVerifyDepth | AuthType | AuthBasicProvider | AuthUserFile | Require valid-user), I had no success in blocking this URL from "curl --insecure".
I have been thinking about testing mod_security, but I don't know if that is the right way.
Could you give me some advice?
Thanks
Hudson
I suspect you may need to refine your understanding of SSL. You can't force clients to verify your SSL certificate. Besides, if you're using a self-signed cert, it would never verify for anyone who didn't add the cert to their CA library.
You could block curl by rejecting requests based on their User-Agent string. But that's just a header, and the client can set it to anything (such as the string of a "valid" browser). If you really want to control clients, one way would be to use client certificates, which are the analog of the server certificate you set up, but on the client side. In that case, in addition to the client (ostensibly) verifying the server's cert, the server would verify the client's cert, providing a very strong and reliable mechanism to verify client access. Unfortunately, due to the difficulty of generating keys, creating certificate signing requests, and signing certs for clients, client certificates are not common. But they're very secure, and a good choice if you control both sides.
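If you do go the client-certificate route with Apache, the check can also be surfaced to the application. A sketch, assuming mod_ssl is configured with SSLVerifyClient require (or optional) and SSLOptions +StdEnvVars so the verification result is exported to PHP:
<?php
// Sketch, assuming mod_ssl exports its variables (SSLOptions +StdEnvVars).
// SSL_CLIENT_VERIFY is "SUCCESS" only when the client presented a certificate
// signed by a CA the server trusts; anything else is rejected here.
if (($_SERVER['SSL_CLIENT_VERIFY'] ?? '') !== 'SUCCESS') {
    http_response_code(403);
    exit('A valid client certificate is required');
}
// The subject CN of the verified client certificate, if you want to log or display it.
echo 'Hello, ', htmlspecialchars($_SERVER['SSL_CLIENT_S_DN_CN'] ?? 'authenticated client');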
A middle ground would be to add an authentication layer into your app to control who can access it (you'd then refuse unauthenticated requests altogether)
In short, though, none of these things block curl. They block clients who cannot authenticate. I would recommend you not focus on the remote browser/client in use (that's at the discretion of your HTTP client); instead, focus on providing the authentication you require. IMHO, trying to block client user-agents is a fool's errand; it's security by obscurity, and anyone can set any user-agent.
Can anyone tell me about SSL and how it can be used to secure a website?
SSL is an encryption layer used to send data securely over HTTP. If you've seen a site with https:// at the beginning, that means it is using SSL. To use SSL to secure your own site, you need hosting that supports it (most do), you need to purchase an SSL certificate from a signing authority (Verisign is one example), and you need to write your web application so that it switches to SSL when needed, for example with a redirect like the sketch below.
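"Switching to SSL when needed" usually boils down to bouncing plain-HTTP requests to the HTTPS version of the same URL; a minimal PHP sketch:
<?php
// Minimal sketch: redirect any plain-HTTP request to its HTTPS equivalent.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}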
SSL doesn't secure your website; it merely encrypts the flow of information between the server and the browser. Despite SSL, you would still be vulnerable to cross-site scripting, unauthenticated requests, and so on.
I apologize beforehand if this is an obvious question: can Apache 2.0 + SSL + basic authentication be trusted to secure a website? The way I see it, SSL creates a secure connection between the client and the server, and thus any HTTP request containing the clear-text password should not be a security issue.
thanks,
S.
You are correct, basic auth is secure as long as you can guarantee the connection is end-to-end encrypted. This means that you must configure the server to force SSL usage by redirecting HTTP requests to HTTPS, or not accept unencrypted connections at all for that URL.
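One way to enforce the "not accept unencrypted connections at all" option inside the application itself, sketched in PHP (check_credentials() is a hypothetical application function):
<?php
// Sketch: never even prompt for Basic Auth credentials over plain HTTP,
// so the clear-text password cannot cross the wire unencrypted.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    http_response_code(403);
    exit('HTTPS is required for this URL');
}
if (!isset($_SERVER['PHP_AUTH_USER'])) {
    header('WWW-Authenticate: Basic realm="Protected"');
    http_response_code(401);
    exit('Authentication required');
}
// check_credentials() is a hypothetical application-level function.
if (!check_credentials($_SERVER['PHP_AUTH_USER'], $_SERVER['PHP_AUTH_PW'] ?? '')) {
    http_response_code(403);
    exit('Forbidden');
}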
"The only fully secure computer is one that is unplugged and turned off"
That said, Jim's answer is Good Enough if you accept SSL level of security :)
I set up my own OpenID provider on my personal server and added a redirect to HTTPS in my Apache config file. When not using a secure connection (when I disable the redirect) I can log in fine, but with the redirect I can't log in, and I get this error message:
The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
I'm guessing that this is because I am using a self-signed certificate.
Can anyone confirm if the self signed certificate is the issue? If not does anyone have any ideas what the problem is?
The primary benefit of using SSL for your OpenID URL is that it gives the relying party a mechanism to discover if DNS has been tampered with. It's impossible for the relying party to tell if an OpenID URL with a self-signed certificate has been compromised.
There are other benefits you get from using SSL on your provider's endpoint URL (easier to establish associations, no eavesdropping on the extension data) which would still hold if you used a self-signed cert, but I would consider those to be secondary.
OpenID is designed in a redirect-transparent way. As long as the necessary key/value pairs are preserved at each redirect, either by GET or POST, everything will operate correctly.
The easiest solution to achieve compatibility with consumers that do not work with self-signed certificates is to use a non-encrypted end-point which redirects checkid_immediate and checkid_setup messages to an encrypted one.
Doing this in your server code is easier than with web server redirects, as the former can more easily deal with POST requests while also keeping the code together. Furthermore, you can use the same end-point to handle all OpenID operations, regardless of whether or not it should be served over SSL, as long as the proper checks are done.
For example, in PHP, the redirect can be as simple as:
// Redirect OpenID authentication requests to https:// of the same URL.
// Assumes a valid OpenID operation arriving over GET; note that PHP exposes
// the "openid.mode" parameter as $_GET['openid_mode'].
if (!isset($_SERVER['HTTPS']) &&
    isset($_GET['openid_mode']) &&
    ($_GET['openid_mode'] === 'checkid_immediate' ||
     $_GET['openid_mode'] === 'checkid_setup')) {
    // header() + exit replaces the PECL-only http_redirect() helper
    header("Location: https://{$_SERVER['HTTP_HOST']}{$_SERVER['REQUEST_URI']}", true, 302);
    exit;
}
As the openid.return_to value was generated against a plain HTTP end-point, as far as the consumer is concerned it is only dealing with a non-encrypted server. Assuming proper OpenID 2.0 operation with sessions and nonces, whatever information passes between the consumer and your server should not reveal anything exploitable. The operations between your browser and the OpenID server, which are the exploitable ones (password snooping or session-cookie hijacking), are done over an encrypted channel.
Aside from keeping out eavesdroppers, having authentication operations be carried out over SSL allows you to use the secure HTTP cookie flag. This adds yet another layer of protection for checkid_immediate operations, should you wish to allow it.
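In PHP that secure cookie flag can be set when the session is started; a small sketch (the array form of session_set_cookie_params requires PHP 7.3+):
<?php
// Sketch: mark the session cookie Secure (and HttpOnly), so the browser
// only ever sends it back over HTTPS and JavaScript cannot read it.
session_set_cookie_params([
    'lifetime' => 0,
    'path'     => '/',
    'secure'   => true,
    'httponly' => true,
]);
session_start();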
(Disclaimer: I'm new to OpenID, so I might be wrong here.) The communication between the OpenID consumer (e.g., Stack Overflow) and the OpenID provider (your server) does not require HTTPS; it will work just as well and just as securely over plain HTTP. What you need to do is configure your server to switch to HTTPS only when it shows you your login page. In that case, only your browser needs to concern itself with the self-signed certificate. You could import the certificate onto your PC and everything would be as secure as with, say, a Verisign-issued certificate.
It sounds like it. The client of your OpenID server doesn't trust the root certification authority.