I set up my own OpenID provider on my personal server and added a redirect to HTTPS in my Apache config file. When not using a secure connection (when I disable the redirect) I can log in fine, but with the redirect I can't log in, and I get this error message:
The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
I'm guessing that this is because I am using a self-signed certificate.
Can anyone confirm that the self-signed certificate is the issue? If not, does anyone have any ideas what the problem is?
The primary benefit of using SSL for your OpenID URL is that it gives the relying party a mechanism to discover if DNS has been tampered with. It's impossible for the relying party to tell if an OpenID URL with a self-signed certificate has been compromised.
There are other benefits you get from using SSL on your provider's endpoint URL (easier to establish associations, no eavesdropping on the extension data) which would still hold if you used a self-signed cert, but I would consider those to be secondary.
OpenID is designed in a redirect-transparent way. As long as the necessary key/value pairs are preserved at each redirect, either by GET or POST, everything will operate correctly.
The easiest solution to achieve compatibility with consumers that do not work with self-signed certificates is to use a non-encrypted end-point which redirects checkid_immediate and checkid_setup messages to an encrypted one.
Doing this in your server code is easier than with web server redirects, as the former can more easily deal with POST requests while also keeping the code together. Furthermore, you can use the same end-point to handle all OpenID operations, regardless of whether or not they should be served over SSL, as long as the proper checks are done.
For example, in PHP, the redirect can be as simple as:
// Redirect OpenID authentication requests to the https:// version of the same URL.
// Assumes a valid OpenID request arriving over GET.
if (empty($_SERVER['HTTPS']) &&
    isset($_GET['openid_mode']) &&
    ($_GET['openid_mode'] === 'checkid_immediate' ||
     $_GET['openid_mode'] === 'checkid_setup')) {
    header("Location: https://{$_SERVER['HTTP_HOST']}{$_SERVER['REQUEST_URI']}", true, 302);
    exit;
}
As the openid.return_to value was generated against a plain HTTP end-point, as far as the consumer is concerned it is only dealing with a non-encrypted server. Assuming proper OpenID 2.0 operation with sessions and nonces, whatever information is passed between the consumer and your server should not reveal anything exploitable. The operations between your browser and the OpenID server which are exploitable (password snooping or session-cookie hijacking) are done over an encrypted channel.
Aside from keeping out eavesdroppers, having authentication operations be carried out over SSL allows you to use the secure HTTP cookie flag. This adds yet another layer of protection for checkid_immediate operations, should you wish to allow it.
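As a rough illustration (in Python, purely because it is compact; the cookie name is made up), the Secure and HttpOnly attributes look like this:
from http.cookies import SimpleCookie

# Hypothetical session cookie issued by the OpenID provider
cookie = SimpleCookie()
cookie["openid_session"] = "opaque-session-id"
cookie["openid_session"]["secure"] = True      # only ever sent over HTTPS
cookie["openid_session"]["httponly"] = True    # not readable by page scripts
print(cookie.output())  # emits the Set-Cookie header with Secure and HttpOnly set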
(Disclaimer: I'm new to OpenID, so I might be wrong here.) The communication between the OpenID consumer (e.g., StackOverflow) and the OpenID provider (your server) does not require HTTPS -- it will work just as well and just as securely over plain HTTP. What you need to do is configure your server to switch to HTTPS only when it shows you your login page. In that case, only your browser needs to concern itself with the self-signed certificate. You could import the certificate onto your PC and everything will be as secure as with, say, a Verisign-issued certificate.
It sounds like it. The client of your OpenID server doesn't trust the root certification authority.
I created an SSL virtual host in Apache with a self-signed certificate.
In my opinion the configuration is correct; however, it is still possible to access this URL using "curl --insecure".
Searching Google, reading several tutorials, and trying several configurations (directives SSLVerifyClient|SSLVerifyDepth|AuthType|AuthBasicProvider|AuthUserFile|Require valid-user), I have had no success in blocking this URL from "curl --insecure".
I have been thinking of testing mod_security, but I don't know if that is the right way.
Could you give me some advice?
Thanks
Hudson
I suspect you may need to refine your understanding of SSL. You can't force clients to verify your SSL certificate. Besides, if you're using a self-signed cert, it would never verify for anyone who didn't add the cert to their CA library.
You could block curl by rejecting requests based on their User-Agent string. But that's just a header, and can be set by the client to anything (such as a "valid" browser's User-Agent). If you really want to control clients, one way would be to use client certificates, which are the analog of the server certificate you set up, but on the client side. In that case, in addition to the client (ostensibly) verifying the server's cert, the server would verify the client's cert, providing a very strong and reliable mechanism to verify client access. Unfortunately, due to the difficulty of generating keys, creating certificate signing requests, and signing certs for clients, client HTTP certificates are not common. But they're very secure, and a good choice if you control both sides.
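As a rough sketch of what requiring client certificates looks like on the server side (shown here with Python's standard library rather than an Apache config; the file names and port are placeholders), the TLS handshake itself fails for any client, curl included, that cannot present a certificate signed by your CA:
import http.server
import ssl

# Placeholder paths: your server key/cert and the CA that signed your clients' certs
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="client-ca.crt")
context.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid certificate

httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()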
A middle ground would be to add an authentication layer into your app to control who can access it (you'd then refuse unauthenticated requests altogether)
In short, though, none of these things block curl. They block clients that cannot authenticate. I would recommend you not focus on the remote browser/client in use (that's at the discretion of your HTTP client); instead, focus on providing the authentication you require. IMHO, trying to block client user-agents is a fool's errand. It's security by obscurity: anyone can set any user-agent.
I've been reading and trying to comprehend the differences in browser side security. From what I gather, SSL is used to keep people from sniffing the traffic you send to the server. This allows you to send a password to a server in clear text...right? As long as you are in an SSL encrypted session you don't have to worry about hashing the password first or anything weird, just send it straight to the server along with the username. After the user authenticates you send them back a JWT and then all future requests to the server should include this JWT assuming they are trying to access a secured area. This allows the server to not even have to check the password, all the server does is verify the signature and that's all the server cares about. As long as the signature is verified you give the client whatever info they are requesting. Have I missed something?
You are correct. "This allows the server not to even have to check the password." Why would you have to check a password on each request?
A JWT is a means of verifying authentication. It is generated upon a successful authentication request and henceforth passed with each request to let the server know this user is authenticated.
It can be used to store arbitrary values such as user_id or api_key, but the payload is only signed, not encrypted, so don't store any sensitive information in it.
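As a rough illustration of that flow (using the PyJWT library; the secret and the claims are made up):
import jwt  # PyJWT

SECRET = "server-side-signing-key"  # never leaves the server

# Issued once, after the username/password check succeeds
token = jwt.encode({"user_id": 42, "role": "member"}, SECRET, algorithm="HS256")

# On every later request: verify the signature; no password check needed
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["user_id"])  # 42 -- note the payload is readable by anyone holding the token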
Be wary though: if a plain JWT is intercepted by a third party, they can assume that user's session and possibly access their data.
SSL is a lower-level form of security, encrypting every request to and from the server to prevent interception and preserve integrity.
SSL is achieved by (purchasing) an SSL certificate and installing it on your server. Basically, an SSL certificate is a small data file that binds a cryptographic key to an 'organisation'. Once installed successfully, HTTPS requests (on port 443 by default) are possible.
We have an HTTP endpoint where a form request is posted containing transaction data from a third-party HTTPS website.
We are investigating ways that our HTTP endpoint can check that the host that posted the request is the third-party website and no one else (i.e. not an attacker).
Is there any way our HTTP endpoint can authenticate with the website where the posted form request originated? Maybe by SSL Certificate Authentication?
Many thanks in advance.
To guarantee that the server on the other side is who it says it is, the safest way is to have it use an SSL certificate. If it also needs to trust who you are, then each side should have its own SSL certificate.
The IP Range solution provided in the comment could be a possible hack but it's quite brittle and it couldn't be applied in a very serious environment.
The Shared Key solution will work and it's reliable but you have to change keys from time to time depending on the volume of traffic between the two servers.
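A minimal sketch of the shared-key idea (hypothetical names; both servers hold the same secret): the third party signs the form body with an HMAC and sends the signature along, and your endpoint recomputes and compares it.
import hashlib
import hmac

SHARED_KEY = b"rotate-me-periodically"  # known only to the two servers

def sign(body: bytes) -> str:
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify(body: bytes, received_signature: str) -> bool:
    # constant-time comparison to avoid timing attacks
    return hmac.compare_digest(sign(body), received_signature)

# The third party would POST the form body plus e.g. an X-Signature header;
# the endpoint accepts the transaction only if verify(body, signature) is True.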
Hope this helps.
It might be better to use message-level security instead of transport-level security (SSL/TLS).
The third party website would sign the message using its certificate (or to be precise, using the private key matching its certificate), and your website would verify this signature.
This would allow the message to be relayed via the user's browser, without needing a direct connection between the two servers.
This sort of mechanism already exists in the Identity Management world, for example with SAML and Shibboleth. (You can still have direct connections between the servers to get additional information too.)
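A minimal sketch of that sign-and-verify step with the Python cryptography package (the key pair is generated inline only to keep the example self-contained; in practice the third party's public key would come from its certificate):
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in key pair for the third-party site
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"transaction-id=123&amount=10.00"  # hypothetical form payload

# Third-party site: sign the message with its private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Your site: verify with the public key from their certificate;
# raises cryptography.exceptions.InvalidSignature if the message was tampered with
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())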
I ask this because I work on an application where the X-AUTH-TOKEN can be copied from one request to another and used to impersonate another person. This makes me nervous, but I'm told that since we're going to use HTTPS we don't have to worry about anything.
So, my question is: is it good enough to trust SSL to protect against the theft of headers used for auth/sessions?
Thanks,
Using HTTPS encryption will indeed prevent someone from stealing your authentication token if they can intercept the traffic. It won't necessarily prevent a man-in-the-middle attack though unless the client enables peer certificate checking.
This question from the Security Stack Exchange describes how to implement MITM attacks against SSL. If I can convince a client running HTTPS to connect to my server, and they accept my certificate, then I can steal your authentication token and re-use it. Peer certificate validation is sometimes a bit of a pain to set up, but it gives you much greater confidence that whoever you are connecting to is who they say they are.
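For example, with the Python requests library peer verification is on by default, and it can be restricted to your own CA bundle (the URL, header value, and paths below are placeholders):
import requests

token = "example-auth-token"  # placeholder

# Default: verify the server certificate against the system CA bundle
resp = requests.get("https://api.example.com/resource",
                    headers={"X-AUTH-TOKEN": token})

# Stricter: only trust certificates issued by your own CA, so a MITM
# presenting some other "valid" certificate is rejected
resp = requests.get("https://api.example.com/resource",
                    headers={"X-AUTH-TOKEN": token},
                    verify="/etc/ssl/certs/my-private-ca.pem")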
"Good enough" is a relative definition and depends on your level of paranoia. Personally I would be happy that my connection is secure enough with HTTPS and peer certificate validation turned on.
Presumably also your authentication token times out so the attack window would be time limited. For example the OpenStack authentication token is by default valid for 24 hours before it expires and then you are required to obtain a new one.
The HTTPS standard implements HTTP entirely on top of SSL/TLS. Because of this, practically everything except for the DNS query is encrypted. Since headers are part of the request and response, and only sent after the secure-channel has been created, they are precisely as secure as the implementation of HTTPS on the given server.
HTTPS is an end-to-end encryption of the entire HTTP session, including the headers, so on the face of it, you should be safe from eavesdropping.
However, that is only part of the story: depending on how the clients are actually connecting (is this a website or an API service?), it may still be possible to trick them into sending the data to the wrong place, for instance:
Presenting a "man in the middle" site with an invalid SSL certificate (since it won't be from a trusted authority, or won't be for the right domain) but convincing users to by-pass this check. Modern browsers make a big fuss about this kind of thing, but libraries for connecting to APIs might not.
Presenting a different site / service end-point at a slightly different URL, with a valid SSL certificate, harvesting authentication tokens, and using them to connect to the real service.
Harvesting the token inside the client application, before it is sent over HTTPS.
No one approach to security is ever sufficient to prevent all attacks. The main consideration should be the trade-off between how complex additional measures would be to implement vs the damage that could be done if an attacker exploited you not doing them.
I was wondering if the Twisted web server offers the possibility of restricting access to some resources using client-certificate-based authentication while allowing access to other resources without certs.
I searched through the questions and found this posting: Client side SSL certificate for a specific web page
Now my question is whether Twisted has implemented SSL renegotiation, and what an example would look like.
Or has there been a different approach since then?
Just to make things clear and to give additional information:
What I actually want to achieve is something like this:
A new user visits a site and has not yet been granted access to the resource because he has no token yet that allows him to view the site.
Therefore, he gets redirected to a login resource that asks for a client certificate. If everything is correct, additional data retrieved from the certificate is stored in the session, which makes up the token.
He then gets redirected back to the entry site, the token is validated, and content appropriate to his authorization level is displayed.
If I understood you correctly, Jean-Paul, this seems to be possible to implement with your strategy, right?
Correct me if I'm missing something or doing it wrong.
It doesn't seem to me that SSL renegotiation is particularly applicable here. What you actually want to do is authorize a request based on the client certificate presented. The only reason SSL renegotiation might be required is if you want the client to be able to request multiple resources over a single persistent HTTPS connection, presenting a different client certificate for each. This strikes me as unlikely to be necessary (or at least, the reasons for wanting this - rather than just letting the client establish a new HTTPS connection, or just authorizing all your resources based on a single client certificate - are obscure).
Authorization in Twisted Web is straightforward. Many prefer a capability-like approach, where the server selects a resource object based on the credentials presented by the client. This resource object has complete control over its content and its children, so by selecting one appropriate for the credentials presented, you completely control what content is available to what clients.
You can read about twisted.web.guard in the http auth entry in the web in 60 seconds series.
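For reference, a stripped-down version of the password-based guard setup from that tutorial looks roughly like this (resource and user names are invented):
from zope.interface import implementer
from twisted.cred import checkers, portal
from twisted.internet import reactor
from twisted.web import guard, resource, server

class ProtectedPage(resource.Resource):
    isLeaf = True
    def render_GET(self, request):
        return b"only for authenticated users\n"

@implementer(portal.IRealm)
class SimpleRealm(object):
    def requestAvatar(self, avatarId, mind, *interfaces):
        if resource.IResource in interfaces:
            return resource.IResource, ProtectedPage(), lambda: None
        raise NotImplementedError()

checker = checkers.InMemoryUsernamePasswordDatabaseDontUse(alice=b"secret")
wrapper = guard.HTTPAuthSessionWrapper(
    portal.Portal(SimpleRealm(), [checker]),
    [guard.BasicCredentialFactory(b"example.org")])

reactor.listenTCP(8080, server.Site(wrapper))
reactor.run()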
That document will familiarize you with the specifics of authentication and authorization in Twisted Web. It will not tell you how to authenticate or authorize based on an SSL client certificate, though.
To do that, you'll need to write something similar to HTTPAuthSessionWrapper - but which inspects the client SSL certificate instead of implementing HTTP authentication as HTTPAuthSessionWrapper does. This will involve implementing:
an IResource that inspects the transport over which the request is received to extract the client certificate (see the sketch after this list)
a credentials type which represents an X509 certificate
a credentials checker which can authenticate your users based on their X509 certificate
and possibly a realm which can authorize users (though you may have written this already, since it is orthogonal to the authentication step, and is therefore reusable even if you don't want to authenticate with SSL certificates)
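A very rough sketch of the first item (the class and helper names are invented, and the server has to be started with a twisted.internet.ssl.CertificateOptions that requests client certificates, otherwise getPeerCertificate() will always return None):
from twisted.internet import interfaces
from twisted.web import resource

class ClientCertWrapper(resource.Resource):
    # Invented wrapper: serves 'protected' only when the client presented a
    # certificate over TLS, otherwise falls back to e.g. a login/redirect resource.
    def __init__(self, protected, fallback):
        resource.Resource.__init__(self)
        self._protected = protected
        self._fallback = fallback

    def _peerCertificate(self, request):
        transport = request.transport
        # Only TLS transports offer getPeerCertificate(); it returns a
        # pyOpenSSL X509 object, or None if no client certificate was sent.
        if interfaces.ISSLTransport.providedBy(transport):
            return transport.getPeerCertificate()
        return None

    def getChildWithDefault(self, name, request):
        cert = self._peerCertificate(request)
        if cert is not None:
            # A real credentials checker would go here, e.g. matching
            # cert.get_subject().commonName against known users.
            return self._protected.getChildWithDefault(name, request)
        return self._fallback.getChildWithDefault(name, request)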
This functionality would be quite welcome in Twisted itself, so I'm sure you can find more help from the Twisted development IRC channel (#twisted-dev on freenode), and I hope you'll contribute whatever you write back to Twisted!