Right now I'm building a personal site/blog and have pretty much got it the way I want, except I'm in two minds about how to add posts to it.
It's just me who'll be adding posts, and to me having a username/password to log in seems rather passé ;).
I'm looking into alternatives to play around and experiment with, and one idea I have is this:
Generate an asymmetric key pair: I keep the private key and the site has the public key. When I try to add a post or modify any content, the site generates a random string, encrypts it with the public key and displays it. I decrypt this using a little app I could whip together and pass the decrypted string back to the site, which allows the modification to continue.
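Roughly the flow I have in mind, as a quick sketch (Python with the pyca/cryptography package; all names here are placeholders, not anything I'd actually ship):

```python
# Sketch of the proposed challenge/response flow (assumes the
# pyca/cryptography package; names are illustrative only).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# One-time setup: I keep the private key, the site stores the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Site side: generate a random challenge and encrypt it with the public key.
challenge = os.urandom(32)
encrypted_challenge = public_key.encrypt(challenge, oaep)

# My side: decrypt with the private key and send the plaintext back.
response = private_key.decrypt(encrypted_challenge, oaep)

# Site side: allow the edit only if the response matches the challenge.
assert response == challenge
```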
I'm just wondering about any caveats I should be on the lookout for, or whether anyone thinks this is a bad idea; perhaps there's an alternative I could try?
Why not just have a username and password and either have your web browser remember the login, or send back an authentication cookie that doesn't expire? Use a self-signed SSL cert to secure the communications channel. If you want to use public/private key crypto, just set up an SSH tunnel and post from localhost on your server. Trust me, it's better to re-use known-good crypto/security than to try to roll your own.
Why not go one stage further from your suggestion and put the encrypted string into the URL?
For example, turn the current date and time into a string - e.g. 0904240905 - encrypt it with your private key and add this to a URL, e.g. http://yoursite.com/admin/dksjfh4d392s, where dksjfh4d392s is the encrypted string. Your site then has a servlet which extracts the encrypted string from the URL, verifies that it decrypts to a recent time, and then gives you a session cookie which allows you to perform admin tasks.
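A rough sketch of that idea (Python with the pyca/cryptography package; this uses a signature rather than literally "encrypting with the private key", which amounts to the same intent, and the ten-minute freshness window is an arbitrary choice):

```python
# Sketch of the signed-timestamp-in-the-URL idea (pyca/cryptography assumed;
# the ten-minute freshness window is an arbitrary choice).
import base64
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Your side: sign the current time and build the admin URL from it.
timestamp = str(int(time.time())).encode()
signature = private_key.sign(timestamp, pss, hashes.SHA256())
token = (base64.urlsafe_b64encode(timestamp).decode()
         + "." + base64.urlsafe_b64encode(signature).decode())
url = f"https://yoursite.com/admin/{token}"

# Site side: split the token, verify the signature, check the time is recent.
ts_b64, sig_b64 = token.split(".", 1)
ts = base64.urlsafe_b64decode(ts_b64)
try:
    public_key.verify(base64.urlsafe_b64decode(sig_b64), ts, pss, hashes.SHA256())
    fresh = abs(time.time() - int(ts.decode())) < 600  # ten minutes
except InvalidSignature:
    fresh = False
```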
I think the asymmetric key is an elegant solution - but a username/password is almost certainly going to be easier to implement.
If you're building your own site then you are just doing it for kicks (otherwise you'd be using WordPress, Drupal, Django, etc.) so why not do things differently?
You might find that having to carry around your keymat app gets a little restrictive, if you find yourself wanting to blog without the means to identify yourself.
But, that said, #Kurt has the right idea for crypto - DIY is almost certainly going to be worse than using something already tried and tested.
One of the wisest statements I ever heard about security was "don't try and re-invent it".
Online security has been through so many iterations that it's highly likely that any bright idea you come up with has some flaw that has previously been found, considered and fixed.
If you want "casual" security, secure your site with a user name and password. If you want "strong" security, stick an SSL certificate on top of it. If you want "bank" security, add in anti-keystroke security.
SSL client certificates do this anyway. Why not just use one of those?
The main reason more people don't use SSL client certificates is that they're an administrative nightmare: you have to get end-users to create keys, then sign their certificates, then make sure the end-users don't lose their keys (when they lose their laptop, upgrade to a new OS, etc.), which they usually do, so you end up signing yet more certificates when the end-users lose their private keys.
Would it be beneficial to use SSL on a CMS backend? The only sensitive data I can think of is the password. The password, as it is now, is encrypted with MCRYPT_RIJNDAEL_256 and a key.
Any comments are appreciated :)
Yes, you absolutely should. Even though you're using decent encryption, unless you're adding a salt to the password it can likely be looked up in a pre-computed hash table (rainbow table), and you could probably find it online within a matter of seconds.
A CMS is something you definitely want to secure, since it potentially gives someone complete access to the content of your website to maliciously alter as they wish, or possibly a way to exploit things further to gain access to the server or disable your login.
HTTPS doesn't really add much overhead, and it's very simple to add. I'd recommend it; even though it's not going to provide perfect security, some is better than none!
Being unable to locate a working PHP/JavaScript implementation of Blowfish, I'm now considering using SHA1 hashing to implement web-based authentication, but my lack of knowledge in this particular field makes me unsure whether the chosen method is secure enough.
The planned roadmap:
User's password is stored on the server as an MD5 hash.
Server issues a public key (MD5 hash of current time in milliseconds)
Client javascript function takes user password as input, and calculates its MD5 hash
Client then concatenates public key and password hash from above, and calculates SHA1 of the resulting string
Client sends SHA1 hash to the server, where similar calculations are performed with public key and user's password MD5 hash
Server compares the hashes; a match indicates successful authentication.
A mismatch indicates authentication failure, and server issues a new public key, effectively expiring the one already used.
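To make the roadmap concrete, here's a small sketch of what both sides would compute (Python standard library; the password is a placeholder, and this only illustrates the scheme as described, not an endorsement of MD5 - see the answers below):

```python
# Illustration only of the roadmap above; the answers below discourage MD5.
import hashlib
import time

# Stored on the server at registration time.
stored_password_md5 = hashlib.md5(b"hunter2").hexdigest()  # placeholder password

# Step 2: server issues a "public key" (really a nonce).
nonce = hashlib.md5(str(time.time_ns()).encode()).hexdigest()

# Steps 3-5: client hashes its password, concatenates the nonce and the hash,
# and sends the SHA1 of the result.
client_password_md5 = hashlib.md5(b"hunter2").hexdigest()
client_proof = hashlib.sha1((nonce + client_password_md5).encode()).hexdigest()

# Steps 5-6: server repeats the calculation with its stored hash and compares.
server_proof = hashlib.sha1((nonce + stored_password_md5).encode()).hexdigest()
authenticated = client_proof == server_proof
```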
Now, the problematic part is concatenating the two values before the SHA1 step: could that be prone to some kind of statistical or other attack?
Is there any specific order in which they should be concatenated to improve the overall quality (i.e. with the higher bits being more important to the strength of the result)?
Thank you in advance.
If you're only using the 'public key' (which isn't actually a public key, it's a nonce) to prevent replay attacks, and it's a fixed size, then concatenation might not be a problem. The nonce should really be random; if you want it to be usable over a certain timeframe, make sure you generate it with an HMAC keyed by a secret, so an adversary cannot predict it.
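If you do want the timeframe variant, here's a rough sketch of an HMAC-derived, time-limited nonce (Python standard library; the secret and the five-minute window are placeholder assumptions):

```python
# Sketch of a time-limited nonce derived with HMAC (illustrative names/values).
import hashlib
import hmac
import time

SERVER_SECRET = b"keep-this-out-of-source-control"  # placeholder secret

def make_nonce(window_seconds: int = 300) -> str:
    """Nonce tied to the current time window; unpredictable without the secret."""
    window = int(time.time()) // window_seconds
    return hmac.new(SERVER_SECRET, str(window).encode(), hashlib.sha256).hexdigest()

def nonce_is_valid(nonce: str, window_seconds: int = 300) -> bool:
    """Accept the current or previous window to allow for clock edges."""
    now = int(time.time()) // window_seconds
    candidates = (hmac.new(SERVER_SECRET, str(w).encode(), hashlib.sha256).hexdigest()
                  for w in (now, now - 1))
    return any(hmac.compare_digest(nonce, c) for c in candidates)
```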
That said, I'm a bit concerned that you might not have a well-thought-out security model. What attack is this trying to prevent, anyway? The user's password hash is unsalted, so a break of your password database will reveal plaintext passwords easily enough anyway, and although having a time-limited nonce will mitigate replay attacks from a passive sniffer, such a passive sniffer could just steal the user's session key anyway. Speaking of which, why not just use the session key as the nonce instead of a timestamp-based system?
But really, why not just use SSL? Cryptography is really hard to get right, and people much smarter than you or I have spent decades reviewing SSL's security to get it right.
Edit: If you're worried about MITM attacks, then nothing short of SSL will save you. Period. Mallory can just replace your super-secure login form with one that sends the password in plaintext to him. Game over. And even a passive attacker can see everything going over the wire - including your session cookie. Once Eve has the session cookie, she just injects it into her browser and is already logged in. Game over.
If you say you can't use SSL, you need to take a very hard look at exactly what you're trying to protect, and what kinds of attacks you will mitigate. You're probably going to need to implement a desktop application of some sort to do the cryptography - if MITMs are going around, then you cannot trust ANY of your HTML or Javascript - Mallory can replace them at will. Of course, your desktop app will need to implement key exchange, encryption and authentication on the data stream, plus authentication of the remote host - which is exactly what SSL does. And you'll probably use pretty much the same algorithms as SSL to do it, if you do it right.
If you decide MITMs aren't in scope, but you want to protect against passive attacks, you'll probably need to implement some serious cryptography in Javascript - we're talking about a Diffie-Hellman exchange to generate a session key that is never sent across the wire (HTML5 Web storage, etc), AES in Javascript to protect the key, etc. And at this point you've basically implemented half of SSL in Javascript, only chances are there are more bugs in it - not least of which is the problem that it's quite hard to get secure random numbers in Javascript.
Basically, you have the choice between:
Not implementing any real cryptographic security (apparently not a choice, since you're implementing all these complex authentication protocols)
Implementing something that looks an awful lot like SSL, only probably not as good
Using SSL.
In short - if security matters, use SSL. If you don't have SSL, get it installed. Every platform that I know of that can run JS can also handle SSL, so there's really no excuse.
bdonlan is absolutely correct. As pointed out, an adversary only needs to replace your Javascript form with evil code, which will be trivial over HTTP. Then it's game over.
I would also suggest looking at moving your passwords to SHA-2 with salts, generated using a suitable cryptographic random number generator (i.e. NOT seeded using the server's clock). Also, perform the hash multiple times. See http://www.jasypt.org/howtoencryptuserpasswords.html sections 2 and 3.
MD5 is broken. Do not use MD5.
Your secure scheme needs to be similar to the following:
Everything happens on SSL. The authentication form, the server-side script that verifies the form, the images, etc. Nothing fancy needs to be done here, because SSL does all the hard work for you. Just a simple HTML form that submits the username/password in "plaintext" is all that is really needed, since SSL will encrypt everything.
User creates new password: you generate a random salt (NOT based off the server time, but from good crypto random source). Hash the salt + the new password many times, and store the salt & resulting hash in your database.
Verify password: your script looks up salt for the user, and hashes the salt + entered password many times. Check for match in database.
The only thing that should be stored in your database is the salt and the hash/digest.
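As a rough sketch of the create/verify steps (Python standard library; PBKDF2-HMAC-SHA256 stands in here for "hash the salt + password many times", and the iteration count is just an example):

```python
# Sketch of salted, iterated password hashing with the standard library.
import hashlib
import hmac
import secrets

ITERATIONS = 200_000  # example value; tune to your hardware

def create_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) to store in the database; never store the password."""
    salt = secrets.token_bytes(16)  # from a crypto-quality RNG, not the clock
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored_digest)
```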
Assuming you have a database of MD5 hashes that you need to support, then the solution might be to add database columns for new SHA-2 hashes & salts. When the user logs in, you check against the MD5 hash as you have been doing. If it works, then follow the steps in "user creates new password" to convert it to SHA-2 & salt, and then delete the old MD5 hash. User won't know what happened.
Anything that really deviates from this is probably going to have some security flaws.
Lets say I have "admin" folder in my public_html and I don't want anyone except me to be able to access it. What if instead of password protecting it (using apache htaccess) I just rename it to "admin-7815696ecbf1c96e6894b779456d330e" and leave it open (with disabled folder indexes of course)?
People usually freak out from such "solution" as it seems extremely vulnerable. But is it really any worse than password protecting? I can't think about any major security risks comparing to password protecting. Would anyone be ever able to find out a name of this folder?
For a personal site, it's probably OK - but only you know the value of what you are protecting. One thing to be wary of is if you have webpages in that directory that link to external sources - by clicking a link to one of those external URLs you will (probably) pass on your "secret" URL in the HTTP Referrer header. Also, it only takes one link back to your "secret" URL and robots and spiders could be all over it, and then you'll find it in Google. So, be very careful!
Bad idea - It's basically security by obscurity.
This is the sort of thing you'd use to protect a phpbb /install/ folder during an install, but not as a permanent solution.
Yes, it's a bad idea.
If you don't use a password, other systems won't treat the URL as a secret.
For example, your browser will now cache that URL in its history. It won't do that for passwords (at least Firefox won't).
What about directory-listing permissions? What about intermediate hops on the internet - they'll see your URL.
If you start going around the security system, the security system won't know you want to be secure.
EDIT
Another way to think about it is, when software sees a password it goes, "This is a security issue and I will treat it as such." But for a URL, it goes, "Meh, another piece of data."
Contrary to what others have said, this is not security through obscurity, and depending on how the random folder name is assigned, and how that name is protected, this can be a very secure solution.
First, choose the folder name from a large "space". Due to the size of the number in the question, it looks like that has been done. Personally, I'd choose a number randomly in a range up to between 2^112 and 2^128, then encode it to text using hexadecimal (base-64 would work in some contexts, but it's not handy for directory names).
The random component should be chosen from a cryptographic quality random number generator.
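Generating such a name is a one-liner; for example (Python; 16 bytes gives 128 bits of randomness, hex-encoded):

```python
# 128 bits from a cryptographic RNG, hex-encoded for use as a directory name.
import secrets

folder_name = "admin-" + secrets.token_hex(16)
print(folder_name)
```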
Then, protect the random name by transmitting and storing it only over secure media. This means, for example, only accessing the contents of the directory over HTTPS. Without SSL, a man-in-the-middle would learn the secret directory name and have unrestricted access.
If this is done by an administrator for their own use only, it's a quick and easy solution. If multiple parties need access to the directory, user names and passwords (which must also be transmitted only over a secure channel) quickly become more convenient because rights can be granted only by an administrator and can be revoked without affecting other users.
As Pyrolistical was saying, a randomized URL isn't protected with the same degree of care as a password would be. There's a lot of security research that goes into the systems that store and transmit passwords, and if you just use a randomized URL instead, you get none of that. (Well, you can force HTTPS for the URL, that gives you some benefit) But if you just want to deter casual snoopers, it's probably good enough. I've used that technique in the past when I wanted to share a URL with a few people, given that the data stored at the URL wasn't especially sensitive.
As for whether the randomized URL approach is appropriate for you, it depends - what web pages can you access from it? Typically "admin" means things like system control apps or database interfaces (phpMyAdmin and the like), and those sorts of things I wouldn't trust to a randomized URL scheme. Basically, if the web pages you're trying to protect are things that allow you to make changes to the system, go with password protection. But if they're read-only monitoring apps, like server statistics (and if there is no sensitive or personally identifying information involved), a randomized URL might be fine.
Honestly, though, why wouldn't you just set up password authentication, given how easy it is?
This is a form of security through obscurity - and it's bad security.
I use a random folder name with an htaccess password and an SSL certificate. The password is a simple fallback, just in case someone clever (say the dude running the IT at the coffee shop) is able to get between your computer and the internet. The SSL encryption is necessary since htaccess passwords are not encrypted in transit.
Whatever you do, make sure you don't have a link anywhere to your page.
I'd say it takes less effort to just turn on HTTP authentication in Apache than to remember some 32+ character gibberish folder name.
I'm starting to use the CryptoStream class. I may be wrong, but if you encrypt something, close the app, and then try to decrypt it, it won't work because a different key will be generated. Since I do need this functionality, I'm wondering whether it's possible to save the key in application settings, and whether that's the right way to go?
If you always run your app under the same user account (it can be a local user or a domain user), the best option would be to use DPAPI. The advantage of using DPAPI is that you do not have to worry about the key (the system generates it for you). If you run the app under different user identities, then it gets more complex because the options that are available range from bad to worse (the major problem is: how do you protect your secret: key, password, passphrase, etc). Depending on what you want to do, you may not need to use encryption at all (e.g. if you want to encrypt a connection string, consider using integrated windows authentication, which does not require a password). For more info on the topic, check out this MSDN article: Safeguard Database Connection Strings and Other Sensitive Settings in Your Code; it may give you some ideas.
Lots of applications save the keys in configuration files. It's a common bad practice.
It's not secure, but all the secure options are harder to implement. There are options based on different factors:
You can derive the key from a password using PBE (password-based encryption), but then you have to enter a password to start your application. This is the so-called "what you know" factor (see the sketch after this list).
Put the key on a smartcard. This is very secure, but you need access to the card on the machine. This is the "what you have" factor.
Ignore other schemes involving encrypting keys with yet another key. It doesn't really change the security strength.
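A rough sketch of the password-based option mentioned above (Python standard library; the iteration count and prompt are illustrative): derive the key at startup from a password the operator types, so the key itself never lives in a config file.

```python
# Sketch of deriving an encryption key from a startup password (PBE).
# The salt can be stored alongside the ciphertext; it is not secret.
import getpass
import hashlib
import secrets

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """PBKDF2-HMAC-SHA256; returns a 256-bit key for your cipher of choice."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)

salt = secrets.token_bytes(16)              # generate once, store with the data
password = getpass.getpass("Startup password: ")
key = derive_key(password, salt)            # never written to disk or settings
```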
One of the goals of OpenID is to be resistant against the failure of any one corporation. This sounds good, but there is another problem: if the site your ID is hosted on goes down, so does your ID. I thought that there must be a login system that would be totally resistant to failure.
My idea is like this: I go to a website and I have to login. I give them my public key. The website sends me back some random data. I sign this data with my private key and send it back to them. If the signature is valid, I get logged in. This has the advantage that my ID is just my public key, so I don't rely on any external site.
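A rough sketch of that exchange (Python with the pyca/cryptography package, using Ed25519 just as an example signature scheme):

```python
# Sketch of the proposed login: the site sends random data, the user signs it,
# and the site verifies the signature against the registered public key.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# User side: the key pair is the identity; the public key is what gets registered.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Site side: issue a random challenge.
challenge = os.urandom(32)

# User side: sign the challenge and send the signature back.
signature = private_key.sign(challenge)

# Site side: verify; a valid signature logs the user in.
try:
    public_key.verify(signature, challenge)
    logged_in = True
except InvalidSignature:
    logged_in = False
```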
To make it so that users don't have to remember their keys, the system could also optionally allow an OpenID-like setup where my keys are hosted on some server and the original site redirects me there to log in; that site signs the data and sends it back to the original site, and I am logged in. This would work much like OpenID, but would allow me to back up my keys, so if that site goes down I can use another one.
Is this a practical system? Am I just wasting my time? Should I not reinvent the wheel and just use OpenID?
Identity cards, like Windows CardSpace, are a good alternative because they are stored on your computer and can be backed up. This is technically called the Identity Metasystem or Identity Cards.
This is different from a centralized identity service like OpenID. The good thing about the OpenID system is that the chance of everyone's identity server going down at once is pretty small. However, individual users can still experience an outage.
The InfoCard system by Microsoft is a good solution, although it has not been very popular.
This is not a new problem: email signing and encryption solve a similar one with a private/public key system. GPG actually has keyservers where you can post your public keys so that people can verify your signatures.
If you really want to avoid any possibility of an identity server being down (a pretty strict requirement), use CardSpace, or some other private/public key system where the users themselves hold the keys and only have to demonstrate possession via some challenge-response algorithm.
Also, the CardSpace solution is not strictly a Microsoft thing; there are implementations for all operating systems. I believe it is a public standard.
This is very similar to how HTTPS works.
With your idea, you need to take good care of your private key. What if your computer crashes, etc.? Also, what about logging in from someone else's computer? Would you trust putting a thumb drive with your private key on it into someone else's machine?
This is also very similar to what the military does with ID cards that have private keys embedded on a chip, issued to service members. A member needs to put his ID into a special reader as well as log on with a unique ID and password that must be changed periodically. This takes care of the case where a member loses his ID and someone else tries to use it.
So I guess my answer is yes, you have a good idea, but perhaps you just need to refine it some more.
Use OpenID. It's so easy to set up and you don't have to debug it.
Windows CardSpace supports something like this. But it hasn't really taken off.
The problem with your system is that if you lose your key due to hardware problems or a system crash, you lose your only way to get to the sites you use that key for.
I would say OpenID is sufficient.
I see a couple of issues with your system:
I need to have the key to log in. If I go out and don't have the file on my person or hosted remotely, I'm out of luck. I also may not be able to enter it on my cell phone or some other device.
You will also need to protect the key from loss, which likely means password-protecting it, and that takes away a lot of the convenience of the system.
What needs to happen for OpenID to be more resistant to ID provider failure is for sites to allow multiple OpenIDs to be associated with an account, just like SO does. So your idea may end up being workable, but I think the effort to get it working and adopted would be much greater than the work needed to get widespread adoption of sites allowing multiple OpenIDs.
Also, check this link for a description of TLS Client Authentication