How to design a secure token authentication protocol using a 6-digit number?

I have a security number generator device, small enough to go on a key-ring, which has a six-digit LCD display and a button. After I have entered my account name and password on an online form, I press the button on the security device and enter the security code that is displayed.
I get a different number every time I press the button and the number generator has a serial number on the back which I had to input during the account set-up procedure.
I would like to incorporate similar functionality in my website. As far as I understand, these are the main components:
Generate a unique N-digit alphanumeric sequence during registration and assign it to the user (permanently)
Allow the user to generate an N (or M?) digit alphanumeric sequence remotely
For now, I don't care about the hardware side; I am only interested in knowing how I may choose a suitable algorithm that will allow the user to generate an N (or M?) long alphanumeric sequence - presumably using his unique ID as a seed
Identify the user from the number generated in step 2 (which decryption method is the most robust for doing this?)
I have the following questions:
Have I identified all the steps required in such an authentication system? If not, please point out what I have missed and why it is important.
What are the most robust encryption/decryption algorithms I can use for steps 1 through 3 (preferably using 64 bits)?

Your server has a table of client IDs and keys. Each client also knows its own key.
The server also maintains a counter for each client, initialised to zero. Each client maintains a counter, also initialised to zero.
When the button is pressed on the client, it generates a HMAC of the current counter value, using its key as the HMAC key. It generates an alphanumeric code from the HMAC output and displays that to the user (to send to the server). The client increments its counter value.
When an authentication request is received by the server, it repeats the same operations as the client, using the stored key and counter for that client. It compares the alphanumeric code it generated with the one received from the client - if they match, the client is authenticated. If they do not match, the server increments its counter for that client and repeats the process, for a small number of repetitions (say, ~10). This allows the server to "catch up" if the client counter has been incremented without contacting the server.
If the counter rolls over to zero, the server should not accept any more authentication requests for that client ID, until it is issued a new key.
There are extensions to this basic protocol: For example, instead of a counter, you can use synchronised clocks on the server and client (with the value changing every N seconds instead of every button press).
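To make the counter-based scheme above concrete, here is a minimal Python sketch. It is only an illustration: the code length, the lookahead window, and the way the HMAC output is reduced to digits are arbitrary choices here, not the dynamic truncation specified in RFC 4226.

```python
import hashlib
import hmac

CODE_LENGTH = 6   # digits shown on the token (illustrative choice)
LOOKAHEAD = 10    # how far the server will "catch up" (illustrative choice)

def generate_code(key: bytes, counter: int) -> str:
    """HMAC the counter with the shared key and reduce it to a short numeric code."""
    digest = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return str(int.from_bytes(digest, "big") % 10 ** CODE_LENGTH).zfill(CODE_LENGTH)

class Client:
    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0

    def press_button(self) -> str:
        code = generate_code(self.key, self.counter)
        self.counter += 1
        return code

class Server:
    def __init__(self):
        self.clients = {}  # client_id -> [key, counter]

    def register(self, client_id: str, key: bytes):
        self.clients[client_id] = [key, 0]

    def authenticate(self, client_id: str, code: str) -> bool:
        key, counter = self.clients[client_id]
        # Try the current counter plus a small window ahead, so the server can
        # resynchronise if the button was pressed without contacting the server.
        for offset in range(LOOKAHEAD):
            if hmac.compare_digest(generate_code(key, counter + offset), code):
                self.clients[client_id][1] = counter + offset + 1
                return True
        return False

server = Server()
server.register("alice", b"shared-secret-key")
client = Client(b"shared-secret-key")
print(server.authenticate("alice", client.press_button()))  # True
```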

What you're describing is called an HOTP, or HMAC-based One Time Password. Implementation is described in this RFC, and unless you have a compelling reason not to, I'd strongly suggest implementing it as-is, since it's been vetted by cryptographers, and is believed secure. Using this will also give you compatibility with existing systems - you should be able to find HOTP-compatible tokens and software apps, like Google Authenticator for Android.
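If you take that advice and use an existing implementation, a library such as pyotp (assuming Python; it is a third-party package, not part of the standard library) handles the RFC 4226 truncation details for you. A minimal sketch:

```python
import pyotp

secret = pyotp.random_base32()     # provision this shared secret to the token/app once
hotp = pyotp.HOTP(secret)

print(hotp.at(0))                  # the code for counter value 0
print(hotp.verify(hotp.at(5), 5))  # True: what the server does for counter value 5
```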

Related

How to prevent usage of signatures that were given by fake websites after signing eth message?

There are plenty of sites where you have to sign their 'sign in' message in order to get a JWT from them. For example, https://www.cryptokitties.co uses such a login system. It verifies the signature on the back-end and sends the JWT back if the address matches. It works well, but this approach troubles me in terms of security.
Assume that someone has created a fake website absolutely identical to CryptoKitties. The user hasn't noticed that the domain is different and signs the same message ("To avoid digital cat burglars, sign below to authenticate with CryptoKitties"), and at this point he has already provided the scammer with his signature and address; since the message was the same, it will work on the original website. So basically you can lose your account by signing the same message on a completely different site. The saddest part is that you cannot reset the private key, which means that your account is gone for good.
I'm not an expert, but it seems to me like a huge hole in security.
The solution I'm thinking about is to encrypt the signature on the client before sending it to the back-end. With this approach, the back-end will only send you a JWT if you've signed a message on our front-end. So, first the back-end decrypts the signature and then verifies the message and address. It will reject signatures which were created on other sites, as the decryption will fail.
So far we have eliminated the fake-website problem. But there is another one: an attacker can intercept an already-encrypted signature and use it on our site. And once again there is no way to reset the signature; it will remain the same. So what I came up with is that the signature must be disposable: it can be used only once. Before signing a message, the client requests from the back-end a special random number linked to the corresponding wallet. Based on this number we build the signature message like this: "To avoid digital cat burglars, sign below to authenticate with CryptoKitties #564324". First the back-end decrypts the signature, verifies the address, and then checks whether the specified random number exists in the database. Once login succeeds, the random number is deleted from the database. Now, even if the user loses his signature, it can't be used by an attacker, because it has already expired.
What do you think? Does described approach make sense?
You have the right idea with "signature must be disposable". The concept is called a nonce (a value used to protect private communications by preventing replay attacks).
The rest of your logic is correct as well, except that you don't need to delete the nonce from the database; rather, rotate it, i.e. update the value to a new pseudo-random (or at least hard-to-guess) value.
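A minimal sketch of that nonce issuance and rotation on the server side (Python; an in-memory dict stands in for the database, and verify_signature is a placeholder for whatever web3 signature-recovery check your stack provides):

```python
import secrets

nonces = {}  # wallet address -> current nonce (a real app would persist this in a database)

def get_login_message(address: str) -> str:
    """Issue (or rotate) the nonce the client must include in the message it signs."""
    nonces[address] = secrets.token_hex(16)
    return ("To avoid digital cat burglars, sign below to authenticate "
            f"with CryptoKitties #{nonces[address]}")

def login(address: str, signature: str, verify_signature) -> bool:
    """verify_signature(message, signature, address) stands in for your web3 recovery check."""
    nonce = nonces.get(address)
    if nonce is None:
        return False
    message = ("To avoid digital cat burglars, sign below to authenticate "
               f"with CryptoKitties #{nonce}")
    if not verify_signature(message, signature, address):
        return False
    nonces[address] = secrets.token_hex(16)  # rotate, so the same signature cannot be replayed
    return True
```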

API authentication with mobile app (by SMS)

We are currently tasked with implementing a (preferably simple) authentication system for a mobile application communicating with a RESTful API. The backend has user-specific data, identified by the user's phone number. I am trying to understand more about security in general, the different methods there are, and why they work the way they work.
I thought of a simple authentication system:
The client sends a verification request to the api which includes their phone number and a generated guid.
The server sends an SMS message to the phone number with a verification code.
The client verifies their device by sending their unique guid, phone number and verification code.
The server responds with some kind of access token which the client can use for further requests.
I have the following questions:
Are there any major flaws in this approach?
Assuming we use HTTPS, is it secure enough to send the data otherwise unencrypted?
Can access tokens be stored on mobile devices safely so that only our app can read them?
Anything else we haven't thought of?
We already figured that when the mobile phone is stolen or otherwise compromised, the data is no longer secure, but that is a risk that is hard to overcome. Access tokens could be valid temporarily to minimize this risk.
I am assuming this approach is way too simple and there is a huge flaw somewhere :) Can you enlighten me?
There is a flaw. The system is susceptible to a brute-force attack.
Suppose I am an attacker. I will generate a guid for myself and send it along with some arbitrary phone number.
Next, I will just brute-force my way through the possible SMS codes - if it's 6 digits, there are only 10^6 combinations. The brute force will be a matter of seconds - and then I will gain access to the data of the person having this phone.
Also, as was pointed out in the comment by Filou, one can force you to send an arbitrary number of SMS messages, effectively making you sustain a financial loss at no cost to the attacker.
There's also no valid defense against this attack:
If there is a limited number (N) of attempts for a given GUID, I will re-generate the GUID every N attempts.
If there's a limit of requests per phone number per amount of time, I can execute a DoS/DDoS attack by flooding every possible number with fake requests - hence, no one will be able to perform any requests.
A login/password or certificate authentication is mandatory before an SMS. Also:
Never use things like GUIDs in cryptography/security protocols. GUIDs are designed for uniqueness, not unpredictability (knowing one value, an attacker may be able to predict future ones). Use a crypto library's built-in functions for generating random values (see the sketch after this list).
Never try to design security protocols yourself. Never. There are an awful lot of caveats that even the SSL 1.0 creators fell into - and they were sharp guys, mind you. Better to copy common and proven schemes (Google's auth is a great example).
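For example, in Python the secrets module is the built-in intended for this kind of value; a tiny illustration (the lengths shown are arbitrary):

```python
import secrets

request_id = secrets.token_urlsafe(32)        # unguessable request identifier, instead of a GUID
sms_code = f"{secrets.randbelow(10**6):06d}"  # 6-digit verification code
print(request_id, sms_code)
```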
The approach you mentioned will work fine. The client initiates a request with the phone number and a random id, and the server returns a verification token to the device. The token is one-time use only, with a set expiry. Then the client sends the phone number, the random id used before, and the verification token, which the server verifies. If valid, the server sends a session token (or auth token) or similar, which can be used for authentication. The session token can have a timeout set by the server.
You did not mention whether it's a web app or not. If it's a web app, you can set an HTTPS-only session cookie from the server. Otherwise, you can store the token locally in the app's local store. In the usual case, apps cannot read private data belonging to other apps.
All communication must take place over HTTPS. Otherwise the whole scheme can be compromised by sniffing traffic, because in the end you are relying on the auth token.
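A rough server-side sketch of that flow, with a set expiry and a capped number of guesses to address the brute-force concern raised above (Python; send_sms is a placeholder for your SMS gateway, and the constants are illustrative):

```python
import secrets
import time
from typing import Optional

CODE_TTL = 300    # seconds a verification code stays valid (illustrative)
MAX_ATTEMPTS = 5  # guesses allowed before the code is invalidated (illustrative)

pending = {}      # phone number -> {"code", "expires", "attempts"}

def send_sms(phone: str, text: str) -> None:
    """Placeholder for the real SMS gateway integration."""
    print(f"SMS to {phone}: {text}")

def request_verification(phone: str) -> None:
    code = f"{secrets.randbelow(10**6):06d}"
    pending[phone] = {"code": code, "expires": time.time() + CODE_TTL, "attempts": 0}
    send_sms(phone, f"Your verification code is {code}")

def verify(phone: str, code: str) -> Optional[str]:
    """Return a session token if the code matches, otherwise None."""
    entry = pending.get(phone)
    if entry is None or time.time() > entry["expires"]:
        return None
    entry["attempts"] += 1
    if entry["attempts"] > MAX_ATTEMPTS:
        del pending[phone]                     # too many guesses: force a fresh code
        return None
    if not secrets.compare_digest(entry["code"], code):
        return None
    del pending[phone]                         # one-time use
    return secrets.token_urlsafe(32)           # the session/auth token for further requests
```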

Can you explain how google authenticator / wireless tokens work?

I've been curious as to how Google generates one-time login tokens in an iPhone app without communicating with the server when the token is assigned. The token changes every ten seconds. How does Google know what the right token is? I disabled data and it still works.
Thanks
It uses the unique key you received during setup, together with a special sequence/algorithm that is part of the authenticator program (in your case, the iPhone .app), to generate a special code. As part of the code-generating process, it also uses the current time on your iPhone to match up with the time on the computer you are logging in from.
Memorize a verification code, wait for the current code to expire, and then log into your Google account on your computer with the previously memorized code: it will still work. Now try changing the time on your phone by 20 minutes or so and use a newly generated code: it will not work.
It works similarly to the HSBC security dongle key-ring device (for online banking), if you have one.
Google Authenticator generates OTPs based on a secret key. The secret key (seed) is a 16- or 32-character alphanumeric code. During token enrollment, the server generates the secret key and shares it with your phone via a QR code (or you can enter it manually). For example, when the TOTP algorithm is used, the server and Google Authenticator both know the seed and the current time, and based on this information they generate the same one-time passwords (OTPs) at predetermined intervals. So the key elements are the secret key and time. Google Authenticator doesn't require any internet connection or mobile network.
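To illustrate how both sides can arrive at the same code from nothing but the shared seed and the clock, here is a rough Python sketch. Real implementations (RFC 6238) use a base32-encoded seed and the dynamic truncation from RFC 4226; the reduction below is simplified:

```python
import hashlib
import hmac
import time
from typing import Optional

def totp(seed: bytes, step: int = 30, digits: int = 6, now: Optional[float] = None) -> str:
    """Derive a time-based code: HMAC the current time window with the shared seed."""
    counter = int((now if now is not None else time.time()) // step)
    digest = hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    return str(int.from_bytes(digest, "big") % 10 ** digits).zfill(digits)

# The phone and the server share the seed once, at enrollment (e.g. via the QR code).
# From then on, both compute the same 6-digit code for each time window,
# with no network connection needed on the phone.
seed = b"shared-secret-from-enrollment"
print(totp(seed))
```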

Network Security

Token cards display a number that changes periodically, perhaps every minute. Each such device has a unique secret key. A human can prove possession of a particular such device by entering the displayed number into a computer system. The computer system knows the secret keys of each authorized device. How would you design such a device?
I believe this kind of authentication scheme is part of "two-factor authentication". In many popular 2FA solutions the user owns a small calculator-sized device holding a preconfigured key and protected by a PIN. Upon entering the PIN, a one-time password (OTP) is generated.
By entering the generated password along with his username, the user "proves" he has the device and knows the PIN code. Aladdin's SafeWord is such a device, popular in corporate/VPN/WiFi-PEAP environments.
Nowadays this is also often centralised, and OTPs are frequently sent through SMS.
If you google around for "how to implement two-factor authentication", you'll find numerous good articles. The topic is complex and involves many different technologies.
You can try this article for instance.
In the token device:
A stable clock with a maximum deviation of 10 s/year (achievable with a quartz crystal oscillator), synchronized to UTC.
A public key stored in the device, individual to each device
Some random saltID, which also serves as the user identification value, so it should have a reasonable length
A hash function
The number the token shows is generated by combining the saltID with the current time, hashing the value and encrypting it with the public key.
Upon login, the authentication system re-performs the steps of the authentication token, minus the public-key encryption (i.e. it just computes the hash). The encrypted hash is decrypted and compared to the calculated hash. If both match, the token is accepted as valid.
The better authentication tokens have a numeric input where the user can enter a PIN code, to protect against loss or theft.
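Since even a good quartz clock drifts over time, the verifying side usually accepts the code for one or two adjacent time windows as well. A small sketch of that tolerance check (Python; it deliberately skips the public-key encryption layer described above and shows only the salt-plus-time hashing and the drift window):

```python
import hashlib
import hmac
import time

def code_for(salt_id: bytes, window: int, digits: int = 6) -> str:
    """Hash the device's saltID together with a time-window index into a short code."""
    digest = hashlib.sha256(salt_id + window.to_bytes(8, "big")).digest()
    return str(int.from_bytes(digest, "big") % 10 ** digits).zfill(digits)

def verify(salt_id: bytes, submitted: str, step: int = 60, drift: int = 1) -> bool:
    """Accept the current window or its immediate neighbours, so a slightly
    fast or slow token clock still authenticates."""
    window = int(time.time() // step)
    return any(hmac.compare_digest(code_for(salt_id, window + d), submitted)
               for d in range(-drift, drift + 1))
```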

When working with most APIs, why do they require two types of authentication, namely a key and a secret?

I have been working with APIs and I've always wondered why you have to use a key and a secret.
Why do you need two types of authentication?
When a server receives an API call, it needs to know two things: Who is making the call, and whether or not the call is legitimate.
If you just had one item ("key"), and included it with every call, it would answer both questions. Based on the "key" the server knows who you are, and because only you know the key it proves that the call is actually coming from you. But including the key with every call is bad security practice: if someone can read even one of your messages in transit, your key is compromised, and someone can pretend to be you. So unless you're using HTTPS, this approach doesn't work.
Instead, you can include a digital signature with every call, signed with some "secret" number. (The "secret" number itself is not sent). If an attacker manages to read your message, they won't be able to figure out this "secret" number from the signature. (This is how digital signatures work: they are one-way).
But this doesn't solve the identification problem: with a signature alone, how does the server know who is making the call? It could try to verify the signature against the "secret" of every single user, but of course this would be very time-consuming.
So, here's what we do: Send both a "key" (that identifies the user), and a signature created using the "secret" number (that proves that the message is legitimate). The server looks up the user based on the key, and then validates the signature using that user's "secret" number.
This is a bit like when you write a check: It has an account number on it (to identify you) and your signature (to prove that you're you). Having just the account number wouldn't prove that you actually wrote the check. Having just the signature without the account number would force the bank to compare your check against all of its signatures for all of its accounts, which would obviously be inefficient.
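In code, that key-plus-secret pattern usually looks something like the sketch below (Python; the header names and the way the request is serialised into a message are illustrative, since every real API defines its own canonicalisation):

```python
import hashlib
import hmac

def sign_request(api_key: str, api_secret: bytes, method: str, path: str, body: str) -> dict:
    """Client side: identify yourself with the key, prove it with an HMAC made from the secret."""
    message = f"{method}\n{path}\n{body}".encode()
    signature = hmac.new(api_secret, message, hashlib.sha256).hexdigest()
    return {"X-Api-Key": api_key, "X-Signature": signature}

def verify_request(headers: dict, method: str, path: str, body: str, secrets_db: dict) -> bool:
    """Server side: look the caller up by key, then recompute and compare the signature."""
    secret = secrets_db.get(headers.get("X-Api-Key"))
    if secret is None:
        return False
    message = f"{method}\n{path}\n{body}".encode()
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers.get("X-Signature", ""))

# Example: the server stores {"alice-key": b"alice-secret"}; the client signs with both values.
headers = sign_request("alice-key", b"alice-secret", "POST", "/v1/orders", '{"qty": 1}')
print(verify_request(headers, "POST", "/v1/orders", '{"qty": 1}', {"alice-key": b"alice-secret"}))
```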