I was wondering whether the Twisted web server offers the possibility to restrict access to some resources using client-certificate-based authentication while allowing access to other resources without certificates.
I searched through the questions and found this posting: Client side SSL certificate for a specific web page
Now my question is whether anyone knows if Twisted has implemented SSL renegotiation, and what an example would look like.
Or has there been a different approach since then?
Just to make things clear and to give additional information:
What I actually want to achieve is something like this:
A new user visits a site and has not yet been granted access to the resource because he has no token yet that allows him to view it.
Therefore, he gets redirected to a login resource that asks for a client certificate. If everything is correct, additional data retrieved from the certificate is stored in the session, which makes up the token.
He then gets redirected back to the entry site, the token is validated, and content appropriate to his authorization level is displayed.
If I understood you correctly, Jean-Paul, this seems to be possible to implement with your strategy, right?
Correct me if I'm missing something or doing it wrong.
It doesn't seem to me that SSL renegotiation is particularly applicable here. What you actually want to do is authorize a request based on the client certificate presented. The only reason SSL renegotiation might be required is if you want the client to be able to request multiple resources over a single persistent HTTPS connection, presenting a different client certificate for each. This strikes me as unlikely to be necessary (or at least, the reasons for wanting this - rather than just letting the client establish a new HTTPS connection, or just authorizing all your resources based on a single client certificate - are obscure).
Authorization in Twisted Web is straightforward. Many prefer a capability-like approach, where the server selects a resource object based on the credentials presented by the client. This resource object has complete control over its content and its children, so by selecting one appropriate for the credentials presented, you completely control what content is available to what clients.
You can read about twisted.web.guard in the HTTP authentication entry of the Twisted Web in 60 Seconds series.
This will familiarize you with the specifics of authentication and authorization in Twisted Web. It will not tell you how to authenticate or authorize based on an SSL client certificate, though.
To do that, you'll need to write something similar to HTTPAuthSessionWrapper, but one which inspects the client SSL certificate instead of implementing HTTP authentication as HTTPAuthSessionWrapper does. This will involve (a rough sketch of these pieces follows the list):
implementing an IResource which inspects the transport over which the request is received in order to extract the client certificate
implementing a credentials type which represents an X509 certificate
implementing a credentials checker which can authenticate your users based on their X509 certificate
and possibly implementing a realm which can authorize users (though you may have written this already, since it is orthogonal to the authentication step, and therefore is reusable even if you don't want to authenticate with SSL certificates)
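Here is a rough, untested sketch of what those pieces might look like, assuming the site is already served over TLS with a context that requests client certificates, so that request.transport.getPeerCertificate() returns a pyOpenSSL X509 object when one was presented. The names X509Credentials, X509Checker, CertRealm, PrivatePage and CertAuthWrapper are invented for this example; they are not part of Twisted.

from zope.interface import Attribute, Interface, implementer

from twisted.cred import checkers, error, portal
from twisted.internet import defer
from twisted.web import resource, util
from twisted.web.resource import IResource


class IX509Credentials(Interface):
    """Credentials consisting of the client's X.509 certificate."""
    certificate = Attribute("A pyOpenSSL X509 object, or None if none was sent.")


@implementer(IX509Credentials)
class X509Credentials(object):
    def __init__(self, certificate):
        self.certificate = certificate


@implementer(checkers.ICredentialsChecker)
class X509Checker(object):
    """Authenticate by matching the certificate's common name against a known set."""
    credentialInterfaces = (IX509Credentials,)

    def __init__(self, allowedCommonNames):
        self.allowedCommonNames = allowedCommonNames

    def requestAvatarId(self, credentials):
        cert = credentials.certificate
        if cert is not None:
            commonName = cert.get_subject().commonName
            if commonName in self.allowedCommonNames:
                return defer.succeed(commonName)
        return defer.fail(error.UnauthorizedLogin("missing or unknown certificate"))


class PrivatePage(resource.Resource):
    """Stand-in for the content you only want certificate holders to see."""
    isLeaf = True

    def __init__(self, avatarId):
        resource.Resource.__init__(self)
        self.avatarId = avatarId

    def render_GET(self, request):
        return ("Hello %s, this content required a client certificate.\n"
                % (self.avatarId,)).encode("utf-8")


@implementer(portal.IRealm)
class CertRealm(object):
    """Authorize: map an authenticated certificate subject to a resource."""
    def requestAvatar(self, avatarId, mind, *interfaces):
        if IResource in interfaces:
            return IResource, PrivatePage(avatarId), lambda: None
        raise NotImplementedError()


class CertAuthWrapper(resource.Resource):
    """Roughly what HTTPAuthSessionWrapper does, but keyed on the client cert."""
    def __init__(self, portal_):
        resource.Resource.__init__(self)
        self._portal = portal_

    def getChild(self, path, request):
        # Only a TLS transport exposes getPeerCertificate(); over plain HTTP the
        # attribute is absent and the credential simply carries None.
        getCert = getattr(request.transport, "getPeerCertificate", lambda: None)
        d = self._portal.login(X509Credentials(getCert()), None, IResource)
        d.addCallback(lambda interface_avatar_logout: interface_avatar_logout[1])
        d.addErrback(lambda failure: resource.ForbiddenResource())
        # Push the consumed path segment back (the same trick HTTPAuthSessionWrapper
        # uses) and let DeferredResource render whatever the portal hands back.
        request.postpath.insert(0, request.prepath.pop())
        return util.DeferredResource(d)

You would then build portal.Portal(CertRealm(), [X509Checker({"alice", "bob"})]), put a CertAuthWrapper at the guarded location of your resource tree, and leave the rest of the tree unguarded so it stays reachable without a certificate.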
This functionality would be quite welcome in Twisted itself, so I'm sure you can find more help from the Twisted development IRC channel (#twisted-dev on freenode), and I hope you'll contribute whatever you write back to Twisted!
We have an HTTP endpoint where a form request is posted containing transaction data from a third-party HTTPS website.
We are investigating ways that our HTTP endpoint can contain code to check that the host that posted the request is the third-party website and no one else (i.e. not an attacker).
Is there any way our HTTP endpoint can authenticate with the website where the posted form request originated? Maybe by SSL Certificate Authentication?
Many thanks in advance.
To guarantee that the server on the other side is who they say they are, the safest way is to have them use an SSL certificate. If they also need to trust who you are, then each side should have its own SSL certificate.
The IP-range solution suggested in the comments could work as a quick hack, but it's quite brittle and shouldn't be relied on in a serious environment.
The shared-key solution will work and is reliable, but you have to rotate the keys from time to time, depending on the volume of traffic between the two servers.
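To make the certificate option concrete, here is a minimal, illustrative sketch in Python (standard library only) of an HTTPS endpoint that only completes the TLS handshake for clients presenting a certificate signed by the CA you agreed on with the third party. The file names and port are placeholders.

import http.server
import ssl


class Endpoint(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # If we get here, the TLS handshake has already verified the client
        # certificate against third_party_ca.pem; we can still inspect it here
        # to pin a particular subject if we want to.
        cert = self.connection.getpeercert()
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # ... hand `body` to the normal form handling here ...
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"accepted: " + repr(cert.get("subject")).encode() + b"\n")


context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")   # our own server identity
context.load_verify_locations("third_party_ca.pem")   # CA that signed their cert
context.verify_mode = ssl.CERT_REQUIRED               # reject clients without a cert

httpd = http.server.HTTPServer(("0.0.0.0", 8443), Endpoint)
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()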
Hope this helps.
It might be better to use message-level security instead of transport-level security (SSL/TLS).
The third party website would sign the message using its certificate (or to be precise, using the private key matching its certificate), and your website would verify this signature.
This would allow the message to be relayed by the user's browser, without needing a direct connection between the two servers.
This sort of mechanism already exists in the Identity Management world, for example with SAML and Shibboleth. (You can still have direct connections between the servers to get additional information too.)
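As a hedged illustration of that idea, using the Python cryptography package and assuming the third party signs with an RSA key (file names and the payload are placeholders), signing on their side and verifying on yours could look roughly like this:

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

payload = b"amount=42.00&order=1234"

# Third party's side: sign the payload with the private key matching its certificate.
with open("third_party.key", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)
signature = private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

# Your side: verify with the public key taken from their certificate; this raises
# InvalidSignature if the payload was tampered with or signed by someone else.
with open("third_party.crt", "rb") as f:
    certificate = x509.load_pem_x509_certificate(f.read())
certificate.public_key().verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())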
I ask this because I work on an application where the X-AUTH-TOKEN can be copied from one request to another to impersonate another person. This makes me nervous, but I'm told that since we're going to use HTTPS we don't have to worry about anything.
So, my question is: is it good enough to trust SSL to protect against stealing headers used for authentication/sessions?
Thanks,
Using HTTPS encryption will indeed prevent someone from reading your authentication token even if they can intercept the traffic. It won't necessarily prevent a man-in-the-middle attack, though, unless the client enables peer certificate checking.
This question from the Security Stack Exchange describes how to implement MITM attacks against SSL. If I can convince a client running HTTPS to connect to my server, and they accept my certificate, then I can steal your authentication token and re-use it. Peer certificate validation is sometimes a bit of a pain to set up, but it gives you a much better chance that whoever you are connecting to is who they say they are.
"Good enough" is a relative definition and depends on your level of paranoia. Personally I would be happy that my connection is secure enough with HTTPS and peer certificate validation turned on.
Presumably also your authentication token times out so the attack window would be time limited. For example the OpenStack authentication token is by default valid for 24 hours before it expires and then you are required to obtain a new one.
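As a small illustration with the Python requests library (the URL and token value are made up), this is the difference between skipping and enforcing peer certificate validation on the client:

import requests

token = "copied-or-issued-auth-token"

# Dangerous: verify=False accepts any certificate, so a man in the middle can
# terminate TLS itself and read (and replay) the X-AUTH-TOKEN header.
requests.get("https://api.example.com/v1/things",
             headers={"X-AUTH-TOKEN": token}, verify=False)

# Safer: the request fails unless the server presents a certificate chaining to a
# trusted CA (verify=True uses the default store; a file path pins your own CA).
requests.get("https://api.example.com/v1/things",
             headers={"X-AUTH-TOKEN": token}, verify=True)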
The HTTPS standard implements HTTP entirely on top of SSL/TLS. Because of this, practically everything except the DNS query is encrypted. Since headers are part of the request and response, and are only sent after the secure channel has been established, they are precisely as secure as the server's HTTPS implementation.
HTTPS is an end-to-end encryption of the entire HTTP session, including the headers, so on the face of it, you should be safe from eavesdropping.
However, that is only part of the story: depending on how the clients are actually connecting (is this a website or an API service?), it may still be possible to trick them into sending the data to the wrong place, for instance:
Presenting a "man in the middle" site with an invalid SSL certificate (since it won't be from a trusted authority, or won't be for the right domain) but convincing users to by-pass this check. Modern browsers make a big fuss about this kind of thing, but libraries for connecting to APIs might not.
Presenting a different site / service end-point at a slightly different URL, with a valid SSL certificate, harvesting authentication tokens, and using them to connect to the real service.
Harvesting the token inside the client application, before it is sent over HTTPS.
No one approach to security is ever sufficient to prevent all attacks. The main consideration is the trade-off between how complex additional measures would be to implement and the damage that could be done if an attacker exploited their absence.
I have a server with SSL certificate and would like to implement a WCF service with username authentication. Can anyone point me to a simple current example?
I find lots that use an X.509 certificate, and I don't understand why that additional piece would be needed. I don't think I want to give the client the certificate I use for SSL either.
I think using SSL is just a matter of setting up web.config appropriately with wsHttpBinding and using https: in the URI that calls the service.
In this case I will have only one or two users (applications at the client, actually) that need to use the service, so I don't see the point of building a database to store lots of login credentials or anything like that. I've read you can pass the credentials in the request header. I hope I can just have the service itself check them without tons of overhead.
I'm really struggling to understand how simple authentication can work for a service, but I know I need something in addition to the service being SSL encrypted.
Edit: Hmm, having read more, I get the impression that using an HTTPS binding for the message rules out any notion of username credentials without something mysterious with certificates going on. I hope I haven't wasted money on the SSL certificate for the server at this point.
Can the IP of the requestor be used to allow the service for a known client only?
If you only need a couple of users, then use the built-in Windows authentication: create Windows user accounts, put the right security option in your binding config and you're done. If you're using SOAP from a non-Windows client you'll have to perform some tricks to make it communicate properly (typically we found that using NTLM authentication from a PHP client required the use of curl rather than the PHP SOAP client library, but I understand that if you use AD accounts this becomes much easier).
The WCF docs have a full description of the authentication options available to you.
I want to be able to set up a web application to automatically (i.e. on a cron run) send a POST request to a remote website. The remote website requires a username/password combination to be sent as part of the POST data. I want the web application to be able to make the POST requests of the remote website without requiring the user to provide the password to be sent with the POST data, each time the request is made.
It seems to me that the only way to do this is to store passwords directly in the database, so that the cron run can execute a POST request that includes the password as part of its POST data. Without storing the password in some form in the database, it seems it would be impossible to provide it in the POST data, unless the user provides it each time the request is made.
Question 1: Am I mistaken and somehow overlooking something logical?
Question 2: Assuming I have to store the passwords in the database, what is the safest procedure for doing so? (MD5 and similar one-way hashes clearly will not work, because I have to send the plaintext password in the POST request.)
Thank you for your help!
a. If you don't know the password... you can't authenticate; that's the whole idea of a password!
b. If you need to know the password, you need to save it in a decryptable way, hence less securely.
c. If you own the site, you can use a cookie with a very long timeout value, but you still need to authenticate at least once.
d. Unless you're guarding money or rocket science, you can encrypt the password, store it in the DB, and decrypt it each time before use; at least that protects you against DB theft (see the sketch after this list).
e. Make sure you're authenticating over a secure channel (such as HTTPS) so the password is not sent as clear text.
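A rough sketch of point d, using the Python cryptography package's Fernet recipe; the file paths, URL and field names are placeholders, and the key must live outside the database (readable only by the cron user), otherwise the encryption adds nothing:

from cryptography.fernet import Fernet
import requests

# Key generated once with Fernet.generate_key() and kept out of the database.
key = open("/etc/myapp/post.key", "rb").read()
fernet = Fernet(key)

# At setup time, store this encrypted value in the database instead of the raw password.
stored = fernet.encrypt(b"the-remote-password")

# In the cron job, decrypt just before use and send it over HTTPS only.
password = fernet.decrypt(stored).decode()
requests.post("https://remote.example.com/api/endpoint",
              data={"username": "serviceuser", "password": password})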
One good solution is probably to use SSL (i.e. HTTPS). You can create a certificate authority on the server side, then have this certificate authority sign a client certificate that you generate. Make sure the HTTP server is configured to trust the newly created certificate authority.
Once this is done, you should install the certificate on the client side. The client must present the certificate when talking to the HTTP server. You have to configure the HTTP server to require a trusted certificate when POSTing to your secure URLs.
An awesome example of how to do this with Apache HTTPD is posted right here!
The document I linked doesn't describe how to set up the certificate authority and create self-signed certificates, but there are tons of examples out there, for example here.
This is a good solution because:
no passwords are stored in the clear
if the private key of the client's certificate is stolen or compromised, you can revoke it on the server side
The key here is that the client is providing its credentials to the server, which is the opposite of what is usually done in a browser context. You can also have the client trust your newly created certificate authority so that it knows it's talking to the right server and not a man in the middle.
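For the client side, here is a hedged sketch with the Python requests library (file names and URL are placeholders) of presenting the client certificate, and trusting only your own certificate authority for the server, when POSTing to the protected URL:

import requests

response = requests.post(
    "https://service.example.com/secure/submit",
    data={"amount": "42.00"},
    cert=("/etc/myapp/client.crt", "/etc/myapp/client.key"),  # client identity
    verify="/etc/myapp/private-ca.pem",  # trust only your own CA for the server
)
response.raise_for_status()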
Given that you have to send the password in clear text, and do so repeatedly without user interaction, you'll need to store it in and retrieve it from a data store (file/database/memory).
What you really need to consider is the last-line-of-security of the password store.
Whether you encrypt it or not doesn't matter. The person/program with access to the data or the cipher key will be able to read that password.
Sort this issue out, document it (this becomes your security policy for the app), and then implement it.
Security is only a level of difficulty you implement to lessen a risk.
Fortunately, Tumblr now implements OAuth, which solves this problem.
I set up my own OpenID provider on my personal server and added a redirect to HTTPS in my Apache config file. When not using a secure connection (when I disable the redirect) I can log in fine, but with the redirect I can't log in, and I get this error message:
The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
I'm guessing that this is because I am using a self-signed certificate.
Can anyone confirm whether the self-signed certificate is the issue? If not, does anyone have any ideas what the problem is?
The primary benefit of using SSL for your OpenID URL is that it gives the relying party a mechanism to discover if DNS has been tampered with. It's impossible for the relying party to tell if an OpenID URL with a self-signed certificate has been compromised.
There are other benefits you get from using SSL on your provider's endpoint URL (easier to establish associations, no eavesdropping on the extension data) which would still hold if you used a self-signed cert, but I would consider those to be secondary.
OpenID is designed in a redirect-transparent way. As long as the necessary key/value pairs are preserved at each redirect, either by GET or POST, everything will operate correctly.
The easiest solution to achieve compatibility with consumers that do not work with self-signed certificates is to use a non-encrypted end-point which redirects checkid_immediate and checkid_setup messages to an encrypted one.
Doing this in your server code is easier than with web server redirects as the former can more easily deal with POST requests, while also keeping code together. Furthermore, you can use the same end-point to handle all OpenID operations, regardless whether or not it should be served over SSL, as long as proper checks are done.
For example, in PHP, the redirect can be as simple as:
// Redirect OpenID authentication requests to the https:// version of the same URL.
// Assumes a valid OpenID operation over GET; note that PHP turns the dot in
// "openid.mode" into an underscore, hence $_GET['openid_mode'].
if (empty($_SERVER['HTTPS']) &&   // empty() also covers servers that set HTTPS to "off"
    ($_GET['openid_mode'] == 'checkid_immediate' ||
     $_GET['openid_mode'] == 'checkid_setup')) {
    // header() is used instead of PECL's http_redirect() so this runs on stock PHP.
    header("Location: https://{$_SERVER['HTTP_HOST']}{$_SERVER['REQUEST_URI']}");
    exit;
}
As the openid.return_to value was generated against a plain HTTP end-point, as far as the consumer is concerned, it is only dealing with a non-encrypted server. Assuming proper OpenID 2.0 operation with sessions and nonces, whatever information is passed between the consumer and your server should not reveal exploitable information. The operations between your browser and the OpenID server which would be exploitable (password snooping or session cookie hijacking) are done over an encrypted channel.
Aside from keeping out eavesdroppers, having authentication operations be carried out over SSL allows you to use the secure HTTP cookie flag. This adds yet another layer of protection for checkid_immediate operations, should you wish to allow it.
(Disclaimer: I'm new to OpenID, so I might be wrong here.) The communication between the OpenID consumer (e.g., StackOverflow) and the OpenID provider (your server) does not require HTTPS -- it will work just as well and just as securely over plain HTTP. What you need to do is configure your server to switch to HTTPS only when it shows you your login page. In that case, only your browser needs to concern itself with the self-signed certificate. You could import the certificate onto your PC and everything will be as secure as with, say, a Verisign-issued certificate.
It sounds like it. The client of your OpenID server doesn't trust the root certification authority.