I am a beginner web-developer and I have some doubts about the security of an API that I developed. It's a simple web-service that requires authentication in order to access/modify data.
I am wondering what are the best practices for authenticating users via HTTP.
Currently my app works like this:
User authenticates through an API request (POST) which requires the username and the password. The response contains info about the user and a TOKEN which will be used in the future for further requests.
My concerns: I don't know if the auth request should be POST. It sounds more like a GET, because POST should create something (at least this is the convention in Ruby on Rails). And then, even with POST or GET, the information is still "visible" while it is being transferred. I heard something about HTTPS - how does that solve the problem?
The token is generated at user creation time and stays the same over time. Is this bad? Should the token be regenerated after a "logout"? I've seen APIs that use an API_KEY along with a token for authentication. How does that work?
I have some GET requests to retrieve information about something. With these requests I pass the token retrieved from the authentication request as a parameter. Is this OK? I mean, that token is sensitive information.
Where can I find more information about these concerns of mine (book, article, w/e)?
HTTPS encrypts all traffic to your web site, and so would hide any GET and POST requests. It requires you to purchase an HTTPS certificate (which is cheap) and get a non-shared IP to host on (not so cheap). (If anyone talks about self-signed certificates - well, it's possible, but ill-advised if external people want to talk to your service.)
Having a long-lasting login token can be bad; it depends what sort of non-repudiation you want. If someone logged in two years ago and is still using the same token, how do you know it's still the original requester? Tokens should expire, and there should be a way to request a new one.
API keys generally work on a shared secret which is swapped out of band (by getting it from the hoster's web site generally). A custom authentication scheme and header is used, and must be calculated and checked for each request. This doesn't require HTTPS - the shared secret is used to generate the authentication header, but isn't sent with it, so the secret doesn't travel with each request. Of course you need to write this code, and figure out what you want the process to be. I'd generally avoid this unless you know what you're doing - you need to take a canonical representation of the request, sign it, then use that as the header. It's not complicated, but it's not simple either.
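To make the shared-secret idea concrete, here is a minimal sketch in Python; the canonical-string format, header layout and key values are illustrative assumptions, not any particular provider's scheme:

import hashlib
import hmac

SHARED_SECRET = b"swapped-out-of-band"   # obtained from the hoster's site, never sent on the wire

def sign_request(api_key, method, path, timestamp):
    # Canonical representation of the request; client and server must build it identically.
    canonical = "\n".join([method, path, timestamp, api_key])
    return hmac.new(SHARED_SECRET, canonical.encode(), hashlib.sha256).hexdigest()

# The client sends something like "Authorization: MyScheme 81169d80:<signature>";
# the server recomputes the signature from the request it received and compares.
signature = sign_request("81169d80", "GET", "/posts", "1234567")
expected = sign_request("81169d80", "GET", "/posts", "1234567")
assert hmac.compare_digest(signature, expected)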
The problem with GET is more one of physical security than web security - I know that I log into sites regularly at work or at home in the company of others - I certainly don't want my credentials appended to the URL as a query string.
Using HTTPS (SSL/TLS) will secure your POST data, as the information is encrypted before it is sent over the line. The protocol uses some quite clever maths during the handshake to negotiate its session keys, so that (as long as certificates are properly validated) it isn't susceptible to a man-in-the-middle attack.
When building SPA style applications using frameworks like Angular, Ember, React, etc. what do people believe to be some best practices for authentication and session management? I can think of a couple of ways of considering approaching the problem.
Treat it no differently than authentication with a regular web application, assuming the API and UI have the same origin domain.
This would likely involve having a session cookie, server side session storage and probably some session API endpoint that the authenticated web UI can hit to get current user information to help with personalization or possibly even determining roles/abilities on the client side. The server would still enforce rules protecting access to data of course, the UI would just use this information to customize the experience.
Treat it like any third-party client using a public API and authenticate with some sort of token system similar to OAuth. This token mechanism would be used by the client UI to authenticate each and every request made to the server API.
I'm not really much of an expert here but #1 seems to be completely sufficient for the vast majority of cases, but I'd really like to hear some more experienced opinions.
This question has been addressed, in a slightly different form, at length, here:
RESTful Authentication
But this addresses it from the server-side. Let's look at this from the client-side. Before we do that, though, there's an important prelude:
Javascript Crypto is Hopeless
Matasano's article on this is famous, but the lessons contained therein are pretty important:
https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2011/august/javascript-cryptography-considered-harmful/
To summarize:
A man-in-the-middle attack can trivially replace your crypto code with <script> function hash_algorithm(password){ lol_nope_send_it_to_me_instead(password); }</script>
A man-in-the-middle attack is trivial against a page that serves any resource over a non-SSL connection.
Once you have SSL, you're using real crypto anyways.
And to add a corollary of my own:
A successful XSS attack can result in an attacker executing code on your client's browser, even if you're using SSL - so even if you've got every hatch battened down, your browser crypto can still fail if your attacker finds a way to execute any javascript code on someone else's browser.
This renders a lot of RESTful authentication schemes impossible or silly if you're intending to use a JavaScript client. Let's look!
HTTP Basic Auth
First and foremost, HTTP Basic Auth. The simplest of schemes: simply pass a name and password with every request.
This, of course, absolutely requires SSL, because you're passing a Base64 (reversibly) encoded name and password with every request. Anybody listening on the line could extract username and password trivially. Most of the "Basic Auth is insecure" arguments come from a place of "Basic Auth over HTTP" which is an awful idea.
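To see just how reversible that encoding is, here is a short Python illustration (the credentials are made up):

import base64

# What a client sends in the Authorization header for Basic auth:
header_value = "Basic " + base64.b64encode(b"alice:hunter2").decode()
print(header_value)                                    # Basic YWxpY2U6aHVudGVyMg==

# Anyone who can read the request recovers the credentials immediately:
print(base64.b64decode(header_value.split(" ", 1)[1])) # b'alice:hunter2'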
The browser provides baked-in HTTP Basic Auth support, but it is ugly as sin and you probably shouldn't use it for your app. The alternative, though, is to stash username and password in JavaScript.
This is the most RESTful solution. The server requires no knowledge of state whatsoever and authenticates every individual interaction with the user. Some REST enthusiasts (mostly strawmen) insist that maintaining any sort of state is heresy and will froth at the mouth if you think of any other authentication method. There are theoretical benefits to this sort of standards-compliance - it's supported by Apache out of the box - you could store your objects as files in folders protected by .htaccess files if your heart desired!
The problem? You are caching on the client-side a username and password. This gives evil.ru a better crack at it - even the most basic of XSS vulnerabilities could result in the client beaming his username and password to an evil server. You could try to alleviate this risk by hashing and salting the password, but remember: JavaScript Crypto is Hopeless. You could alleviate this risk by leaving it up to the Browser's Basic Auth support, but.. ugly as sin, as mentioned earlier.
HTTP Digest Auth
Is Digest authentication possible with jQuery?
A more "secure" auth, this is a request/response hash challenge. Except JavaScript Crypto is Hopeless, so it only works over SSL and you still have to cache the username and password on the client side, making it more complicated than HTTP Basic Auth but no more secure.
Query Authentication with Additional Signature Parameters.
Another more "secure" auth, where you encrypt your parameters with nonce and timing data (to protect against repeat and timing attacks) and send the. One of the best examples of this is the OAuth 1.0 protocol, which is, as far as I know, a pretty stonking way to implement authentication on a REST server.
https://www.rfc-editor.org/rfc/rfc5849
Oh, but there aren't any OAuth 1.0 clients for JavaScript. Why?
JavaScript Crypto is Hopeless, remember. JavaScript can't participate in OAuth 1.0 without SSL, and you still have to store the client's username and password locally - which puts this in the same category as Digest Auth - it's more complicated than HTTP Basic Auth but it's no more secure.
Token
The user sends a username and password, and in exchange gets a token that can be used to authenticate requests.
This is marginally more secure than HTTP Basic Auth, because as soon as the username/password transaction is complete you can discard the sensitive data. It's also less RESTful, as tokens constitute "state" and make the server implementation more complicated.
SSL Still
The rub though, is that you still have to send that initial username and password to get a token. Sensitive information still touches your compromisable JavaScript.
To protect your user's credentials, you still need to keep attackers out of your JavaScript, and you still need to send a username and password over the wire. SSL Required.
Token Expiry
It's common to enforce token policies like "hey, when this token has been around too long, discard it and make the user authenticate again." or "I'm pretty sure that the only IP address allowed to use this token is XXX.XXX.XXX.XXX". Many of these policies are pretty good ideas.
Firesheeping
However, using a token without SSL is still vulnerable to an attack called 'sidejacking': http://codebutler.github.io/firesheep/
The attacker doesn't get your user's credentials, but they can still pretend to be your user, which can be pretty bad.
tl;dr: Sending unencrypted tokens over the wire means that attackers can easily nab those tokens and pretend to be your user. FireSheep is a program that makes this very easy.
A Separate, More Secure Zone
The larger the application you're running, the harder it is to absolutely ensure that attackers won't be able to inject some code that changes how you process sensitive data. Do you absolutely trust your CDN? Your advertisers? Your own code base?
Common for credit card details and less common for username and password - some implementers keep 'sensitive data entry' on a separate page from the rest of their application, a page that can be tightly controlled and locked down as best as possible, preferably one that is difficult to phish users with.
Cookie (just means Token)
It is possible (and common) to put the authentication token in a cookie. This doesn't change any of the properties of auth with the token, it's more of a convenience thing. All of the previous arguments still apply.
Session (still just means Token)
Session Auth is just Token authentication, but with a few differences that make it seem like a slightly different thing:
Users start with an unauthenticated token.
The backend maintains a 'state' object that is tied to a user's token.
The token is provided in a cookie.
The application environment abstracts the details away from you.
Aside from that, though, it's no different from Token Auth, really.
This wanders even further from a RESTful implementation - with state objects you're going further and further down the path of plain ol' RPC on a stateful server.
OAuth 2.0
OAuth 2.0 looks at the problem of "How does Software A give Software B access to User X's data without Software B having access to User X's login credentials."
The implementation is very much just a standard way for a user to get a token, and then for a third party service to go "yep, this user and this token match, and you can get some of their data from us now."
Fundamentally, though, OAuth 2.0 is just a token protocol. It exhibits the same properties as other token protocols - you still need SSL to protect those tokens - it just changes up how those tokens are generated.
There are two ways that OAuth 2.0 can help you:
Providing Authentication/Information to Others
Getting Authentication/Information from Others
But when it comes down to it, you're just... using tokens.
Back to your question
So, the question that you're asking is "should I store my token in a cookie and have my environment's automatic session management take care of the details, or should I store my token in Javascript and handle those details myself?"
And the answer is: do whatever makes you happy.
The thing about automatic session management, though, is that there's a lot of magic happening behind the scenes for you. Often it's nicer to be in control of those details yourself.
The other answer is: Use https for everything or brigands will steal your users' passwords and tokens.
You can increase security in authentication process by using JWT (JSON Web Tokens) and SSL/HTTPS.
The Basic Auth / Session ID can be stolen via:
MITM attack (Man-In-The-Middle) - without SSL/HTTPS
An intruder gaining access to a user's computer
XSS
By using JWT you're encrypting the user's authentication details, storing them on the client, and sending them along with every request to the API, where the server/API validates the token. It can't be decrypted/read without the private key (which the server/API keeps secret). Read update.
The new (more secure) flow would be:
Login
User logs in and sends login credentials to API (over SSL/HTTPS)
API receives login credentials
If valid:
Register a new session in the database Read update
Encrypt User ID, Session ID, IP address, timestamp, etc. in a JWT with a private key.
API sends the JWT token back to the client (over SSL/HTTPS)
Client receives the JWT token and stores it in localStorage / a cookie
Every request to API
User sends an HTTP request to the API (over SSL/HTTPS) with the stored JWT token in the HTTP header
API reads HTTP header and decrypts JWT token with its private key
API validates the JWT token, matches the IP address from the HTTP request with the one in the JWT token and checks if session has expired
If valid:
Return response with requested content
If invalid:
Throw exception (403 / 401)
Flag intrusion in the system
Send a warning email to the user.
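For what it's worth, here is a compressed sketch of that flow in Python, assuming the PyJWT library; the secret, the in-memory session store and the claim names are placeholders, and note that this signs the claims rather than encrypting them (see the update below):

import datetime
import jwt  # PyJWT

SECRET = "server-side-signing-key"   # known only to the API server
sessions = {}                        # stand-in for the session table in the database

def login(user_id, session_id, ip):
    sessions[session_id] = {"user_id": user_id, "revoked": False}
    claims = {
        "sub": user_id,
        "sid": session_id,
        "ip": ip,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def authenticate(token, request_ip):
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])   # verifies signature and exp
    if claims["ip"] != request_ip or sessions[claims["sid"]]["revoked"]:
        raise PermissionError("invalid session")               # becomes a 401/403 in the API layer
    return claims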
Updated 30.07.15:
JWT payload/claims can actually be read without the private key (secret) and it's not secure to store it in localStorage. I'm sorry about these false statements. However they seem to be working on a JWE standard (JSON Web Encryption).
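That correction is easy to demonstrate: the payload is only Base64url-encoded, so anyone holding a token can read its claims without knowing the signing secret. A tiny Python illustration (the token is whatever your API issued):

import base64
import json

def read_claims_without_key(token):
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)   # restore Base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))    # no secret needed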
I implemented this by storing claims (userID, exp) in a JWT, signed it with a private key (secret) the API/backend only knows about and stored it as a secure HttpOnly cookie on the client. That way it cannot be read via XSS and cannot be manipulated, otherwise the JWT fails signature verification. Also by using a secure HttpOnly cookie, you're making sure that the cookie is sent only via HTTP requests (not accessible to script) and only sent via secure connection (HTTPS).
Updated 17.07.16:
JWTs are by nature stateless. That means they invalidate/expire themselves. By adding the SessionID to the token's claims you're making it stateful, because its validity no longer depends only on signature verification and the expiry date; it also depends on the session state on the server. However, the upside is that you can invalidate tokens/sessions easily, which you couldn't do before with stateless JWTs.
I would go for the second, the token system.
Do you know about ember-auth or ember-simple-auth? They both use a token-based system; as ember-simple-auth states:
A lightweight and unobtrusive library for implementing token based authentication in Ember.js applications.
http://ember-simple-auth.simplabs.com
They have session management, and are easy to plug into existing projects too.
There is also an Ember App Kit example version of ember-simple-auth: Working example of ember-app-kit using ember-simple-auth for OAuth2 authentication.
On user auth success my auth server generates a token and passes it to the client.
The docs say that the client has to add the following headers:
X-Auth-CouchDB-UserName: username;
X-Auth-CouchDB-Roles:comma-separated (,) list of user roles;
X-Auth-CouchDB-Token: authentication token.
Does it mean that the client defines his own roles on every request? Why can't he add 'admin' into the list of roles then?
A client is anything that uses or requests resources from a server.
"The client" in this case is your proxy/auth server, not a web browser. (The documentation could probably stand to be clarified a bit.)
So yes, your proxy/auth server, the client to CouchDB, should set that header as appropriate.
By extension, it should also not pass through any X-Auth-Couch headers received from its client (presumably a web browser).
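A rough sketch of the proxy side in Python with the requests library; the secret value, database URL and role list are made up, and the token format (an HMAC-SHA1 of the username with the couch_httpd_auth secret when proxy_use_secret is enabled) should be double-checked against the CouchDB docs for your version:

import hashlib
import hmac
import requests

COUCH_SECRET = b"value-of-couch_httpd_auth-secret"   # shared only between proxy and CouchDB

def forward_to_couchdb(authenticated_user, roles, incoming_headers):
    # Never forward X-Auth-CouchDB-* headers received from the browser.
    headers = {k: v for k, v in incoming_headers.items()
               if not k.lower().startswith("x-auth-couchdb")}
    headers["X-Auth-CouchDB-UserName"] = authenticated_user
    headers["X-Auth-CouchDB-Roles"] = ",".join(roles)
    headers["X-Auth-CouchDB-Token"] = hmac.new(
        COUCH_SECRET, authenticated_user.encode(), hashlib.sha1).hexdigest()
    return requests.get("http://localhost:5984/mydb/_all_docs", headers=headers)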
Good observation. Using JWT authentication would seem to close this loophole, as my understanding is that the entire token is signed over.
That said, in neither case can one avoid:
having to fully trust the entity holding the secret
having to carefully guard against leakage of its headers
The former is sort of unavoidable, as the point of these plugins is to delegate authentication. Either you trust the proxy (or JWT issuer) or you leave those authentication_handlers disabled.
The latter is something that e.g. OAuth 1 hardened itself against somewhat, in that more or less the entire request was signed over, so one couldn't simply take a couple of auth headers from an earlier leaked request and slap them on a new forged request. Nonce and timestamp fields were supposed to be checked to avoid verbatim replays of prior requests as well. (All this was dropped in OAuth 2, for whatever that's worth… and even OAuth 1 had some notable loopholes in practice…)
So in practice either the Proxy or the JWT authentication handlers should be used with care. Assuming a "firewall" of sorts is drawn around both your CouchDB and your authentication source, then, as @Flimzy's answer mentions, preventing unexpected headers from making their way inwards from outside that boundary (as well as keeping the real headers from leaking outwards) should mitigate most potential abuse.
Currently I have a server that makes use of JSON Web Tokens in order to manage user authentication.
When a user submits a valid authentication request, they receive a JWT which contains information such as their user ID that allows the server to process authentication sensitive requests.
The plain text body of the JWT will take a form similar to:
{
    "UserID": "100",
    "Expires": "(expiryDate)",
    "IssuedBy": "ServerOne"
}
However, as you could quite correctly point out, this does not afford us any way to validate that the agent presenting the access token is in fact the user who it was originally issued to.
As such, if someone were able to obtain the JWT, then assuming it has not expired they would be able to assume the user's identity and gain unauthorised access.
A stateless server is a must for the current project, so I would struggle to use a solution which involves the server keeping track of logged in users.
Is it possible to achieve security using Tokens in this manner?
I've done a bit of reading on the topic, and I remember someone suggesting to use something like the "X-Forwarded-For" header and placing the user's IP in the token then comparing the two, but that has its own drawbacks.
There are a couple of specs in the OAuth 2.0 / JWT space right now, which try to address this problem. Take a look at RFC 7800 and this draft.
There are several commercial Authorization Servers / Identity Providers out there which already support RFC 7800 and a couple of open source implementations as well.
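For a flavour of what RFC 7800 adds: the token carries a cnf (confirmation) claim holding the key the presenter must prove possession of. Roughly, and with purely illustrative values (shown as a Python dict to match the other sketches here):

# Claims of a proof-of-possession JWT per RFC 7800 (values are illustrative).
claims = {
    "iss": "https://auth.example.com",
    "sub": "user-100",
    "exp": 1700000000,
    "cnf": {              # confirmation claim
        "jwk": {          # public half of the key the client holds
            "kty": "EC",
            "crv": "P-256",
            "x": "...",
            "y": "...",
        },
    },
}
# The client then proves possession of the matching private key on each request
# (for example by signing the request), and the API checks that proof against cnf
# in addition to verifying the JWT itself.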
I am trying to implement stateless authentication with JWT for my RESTful APIs.
AFAIK, JWT is basically an encrypted string passed as HTTP headers during a REST call.
But what if there's an eavesdropper who sees the request and steals the token? Then he will be able to fake requests with my identity?
Actually, this concern applies to all token-based authentication.
How to prevent that? A secure channel like HTTPS?
I'm the author of a node library that handles authentication in quite some depth, express-stormpath, so I'll chime in with some information here.
First off, JWTs are typically NOT encrypted. While there is a way to encrypt JWTs (see: JWEs), this is not very common in practice for many reasons.
Next up, any form of authentication (using JWTs or not) is subject to man-in-the-middle (MitM) attacks. These attacks happen when an attacker can VIEW YOUR NETWORK traffic as you make requests over the internet. This is what your ISP can see, the NSA, etc.
This is what SSL helps protect against: by encrypting your NETWORK traffic from your computer -> some server when authenticating, a third party who is monitoring your network traffic can NOT see your tokens, passwords, or anything like that unless they're somehow able to get a copy of the server's private SSL key (unlikely). This is the reason SSL is MANDATORY for all forms of authentication.
Let's say, however, that someone is able to exploit your SSL and is able to view your token: the answer to your question is that YES, the attacker will be able to use that token to impersonate you and make requests to your server.
Now, this is where protocols come in.
JWTs are just one standard for an authentication token. They can be used for pretty much anything. The reason JWTs are sort of cool is that you can embed extra information in them, and you can validate that nobody has messed with it (signing).
HOWEVER, JWTs themselves have nothing to do with 'security'. For all intents and purposes, JWTs are more or less the same thing as API keys: just random strings that you use to authenticate against some server somewhere.
What makes your question more interesting is the protocol being used (most likely OAuth2).
The way OAuth2 works is that it was designed to give clients TEMPORARY tokens (like JWTs!) for authentication for a SHORT PERIOD OF TIME ONLY!
The idea is that if your token gets stolen, the attacker can only use it for a short period of time.
With OAuth2, you have to re-authenticate yourself with the server every so often by supplying your username/password OR API credentials and then getting a token back in exchange.
Because this process happens every now and then, your tokens will frequently change, making it harder for attackers to constantly impersonate you without going through great trouble.
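For illustration, here is roughly what that short-lived-token behaviour looks like with the PyJWT library in Python; the 15-minute lifetime and the names are arbitrary choices:

import datetime
import jwt  # PyJWT

SECRET = "api-signing-secret"

def issue_short_lived_token(user_id):
    now = datetime.datetime.utcnow()
    return jwt.encode({"sub": user_id, "iat": now,
                       "exp": now + datetime.timedelta(minutes=15)},
                      SECRET, algorithm="HS256")

def verify(token):
    try:
        return jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.ExpiredSignatureError:
        # A stolen token stops working here; the client has to re-authenticate
        # with its username/password or API credentials to get a fresh one.
        raise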
Hopefully this helps ^^
I know this is an old question, but I think I can drop my $0.50 here; perhaps someone can improve on my approach or provide an argument to reject it entirely.
I'm using JWTs in a RESTful API over HTTPS (ofc).
For this to work, you should always issue short-lived tokens (it depends on the case; in my app I'm actually setting the exp claim to 30 minutes and the ttl to 3 days, so you can refresh the token as long as its ttl is still valid and the token has not been blacklisted).
For the authentication service, in order to invalidate tokens, I like to use an in-memory cache layer (Redis in my case) as a JWT blacklist/ban-list in front, based on a few criteria:
(I know it breaks the RESTful philosophy, but the stored entries are really short-lived, as I blacklist tokens only for their remaining time-to-live, the ttl claim.)
Note: blacklisted tokens can't be automatically refreshed
If user.password or user.email has been updated (requires password confirmation), the auth service returns a refreshed token and invalidates (blacklists) the previous one(s), so if your client detects that the user's identity has been compromised somehow, you can ask that user to change their password.
If you don't want to use the blacklist for it, you can (but I don't encourage you to) validate the iat (issued at) claim against user.updated_at field (if jwt.iat < user.updated_at then JWT is not valid).
User deliberately logged out.
Finally you validate the token normally as everybody does.
Note 2: instead of using the token itself (which is really long) as the cache key, I suggest generating a UUID for the jti claim and using that. I also think (not sure, since it just came to mind) that you could use this same UUID as the CSRF token as well, by returning it in a secure, non-HttpOnly cookie and properly implementing the X-XSRF-TOKEN header using JS. This way you avoid the computational work of creating yet another token for CSRF checks.
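If it helps, here is a hedged sketch of that ban-list idea with the redis-py client; the key prefix and lifetimes are just examples, and the jti is assumed to be a UUID generated when the token is issued:

import redis

r = redis.Redis()   # the in-memory cache layer used as the JWT ban-list

def blacklist(jti, seconds_until_token_expiry):
    # Keep the entry only for the token's remaining time-to-live.
    r.setex("jwt:blacklist:" + jti, seconds_until_token_expiry, 1)

def is_blacklisted(jti):
    return r.exists("jwt:blacklist:" + jti) == 1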
Sorry for being a little late on this, but I had similar concerns and now want to contribute something on the same topic.
1) rdegges made an excellent point: JWT has nothing to do with "security" by itself; it simply validates whether anyone has tampered with the payload (signing). SSL helps prevent breaches.
2) Now, if SSL is also somehow compromised, any eavesdropper can steal our bearer token (JWT) and impersonate the genuine user. As a next-level step, we can require "proof of possession" of the JWT from the client.
3) With this approach, the presenter of the JWT possesses a particular proof-of-possession (PoP) key, which lets the recipient cryptographically confirm whether the request comes from the same authentic user or not.
I referred to the Proof of Possession article for this and am convinced by the approach.
I will be delighted if this contributes anything.
Cheers (y)
To deal with the problem of tokens being stolen, you can map each JWT to a list of valid IPs.
For example, when the user logs in from a particular IP, you can add that IP as a valid IP for that JWT, and when you get a request with this JWT from another IP (because the user changed networks, the JWT was stolen, or any other reason) you can do the following, depending on your use case:
Map a CSRF token to the user token; in case the user token gets stolen, the CSRF token will not match, and you can invalidate that user token.
You can show the user a captcha to validate whether they are a legitimate user. If they solve the captcha, add that IP to the valid list for that JWT.
You can log the user out and require them to log in again.
You can alert the user that their IP has changed or that a request came from a different location.
You can also use a cache with a small expiry of 5 minutes in the above use cases instead of checking each and every time.
Suggestions for improvement are welcome.
Can't we just add the IP of the initial host that requested this JWT as part of the claims? When the JWT is stolen and used from a different machine, the server can check, while validating the token, whether the requesting machine's IP matches the one set in the claims. It would not match, and hence the token can be rejected. Also, if the user tries to manipulate the token by setting his own IP in it, the token will be rejected because it has been altered.
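A small sketch of that check in Python with PyJWT (names are illustrative), with the caveat from the neighbouring answers that client IPs change legitimately:

import jwt  # PyJWT

SECRET = "signing-secret"

def issue(user_id, client_ip):
    return jwt.encode({"sub": user_id, "ip": client_ip}, SECRET, algorithm="HS256")

def verify(token, request_ip):
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])   # rejects altered tokens
    if claims["ip"] != request_ip:
        # Stolen and replayed from another machine, or the user simply changed networks.
        raise PermissionError("token presented from an unexpected IP")
    return claims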
Once the token gets stolen - it is game over.
However there is a way to make it harder to make use of a stolen token.
Check https://cheatsheetseries.owasp.org/cheatsheets/JSON_Web_Token_for_Java_Cheat_Sheet.html#token-sidejacking for reference.
Basically, you create an x-byte-long random fingerprint in hexadecimal, put a hash of it (using, for example, SHA-512) inside the token, and store the raw fingerprint value in an HttpOnly, Secure cookie. That way a stolen token alone is not enough, because the attacker cannot produce the matching cookie from the hash.
Now, instead of validating just the signature and expiry date of the token, you also need to validate that the cookie exists and that its fingerprint value hashes to the fingerprint hash stored in the token.
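A minimal sketch of that recommendation in Python with PyJWT; the byte length, hash choice and claim name are illustrative:

import hashlib
import secrets
import jwt  # PyJWT

SECRET = "signing-secret"

def issue_token_with_fingerprint(user_id):
    fingerprint = secrets.token_hex(32)                        # random value, hex-encoded
    fingerprint_hash = hashlib.sha512(fingerprint.encode()).hexdigest()
    token = jwt.encode({"sub": user_id, "fph": fingerprint_hash}, SECRET, algorithm="HS256")
    # The raw fingerprint goes into an HttpOnly, Secure cookie; only its hash travels in the JWT.
    return token, fingerprint

def verify(token, fingerprint_cookie):
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])   # signature + expiry as usual
    if hashlib.sha512(fingerprint_cookie.encode()).hexdigest() != claims["fph"]:
        raise PermissionError("fingerprint cookie does not match token")
    return claims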
The client should use part of the hash of the user's password to encrypt the time at which the HTTP message is sent to the server. This part of the hash should also be encrypted with a server secret key into the token when it is created.
The server can then decrypt the HTTP request time and verify that only a short delay has passed.
The token is going to change with every request.
I'm developing a REST API that requires authentication. Because the authentication itself occurs via an external webservice over HTTP, I reasoned that we would dispense tokens to avoid repeatedly calling the authentication service. Which brings me neatly to my first question:
Is this really any better than just requiring clients to use HTTP Basic Auth on each request and caching calls to the authentication service server-side?
The Basic Auth solution has the advantage of not requiring a full round-trip to the server before requests for content can begin. Tokens can potentially be more flexible in scope (i.e. only grant rights to particular resources or actions), but that seems more appropriate to the OAuth context than my simpler use case.
Currently tokens are acquired like this:
curl -X POST localhost/token --data "api_key=81169d80...
&verifier=2f5ae51a...
×tamp=1234567
&user=foo
&pass=bar"
The api_key, timestamp and verifier are required by all requests. The "verifier" is returned by:
sha1(timestamp + api_key + shared_secret)
My intention is to only allow calls from known parties, and to prevent calls from being reused verbatim.
Is this good enough? Underkill? Overkill?
With a token in hand, clients can acquire resources:
curl localhost/posts?api_key=81169d80...
&verifier=81169d80...
&token=9fUyas64...
×tamp=1234567
For the simplest call possible, this seems kind of horribly verbose. Considering the shared_secret will wind up being embedded in (at minimum) an iOS application, from which I would assume it can be extracted, is this even offering anything beyond a false sense of security?
Let me separate everything out and approach each problem in isolation:
Authentication
For authentication, baseauth has the advantage that it is a mature solution on the protocol level. This means a lot of "might crop up later" problems are already solved for you. For example, with BaseAuth, user agents know the password is a password so they don't cache it.
Auth server load
If you dispense a token to the user instead of caching the authentication on your server, you are still doing the same thing: caching authentication information. The only difference is that you are shifting the responsibility for the caching to the user. This seems like unnecessary labor for the user with no gains, so I recommend handling this transparently on your server, as you suggested.
Transmission Security
If you can use an SSL connection, that's all there is to it; the connection is secure*. To prevent accidental multiple execution, you can filter duplicate URLs or ask users to include a random component ("nonce") in the URL.
url = username:key@myhost.com/api/call/nonce
If that is not possible, and the transmitted information is not secret, I recommend securing the request with a hash, as you suggested in the token approach. Since the hash provides the security, you could instruct your users to provide the hash as the baseauth password. For improved robustness, I recommend using a random string instead of the timestamp as a "nonce" to prevent replay attacks (two legit requests could be made during the same second). Instead of providing separate "shared secret" and "api key" fields, you can simply use the api key as shared secret, and then use a salt that doesn't change to prevent rainbow table attacks. The username field seems like a good place to put the nonce too, since it is part of the auth. So now you have a clean call like this:
nonce = generate_secure_password(length: 16);
one_time_key = nonce + '-' + sha1(nonce+salt+shared_key);
url = username:one_time_key@myhost.com/api/call
It is true that this is a bit laborious. This is because you aren't using a protocol level solution (like SSL). So it might be a good idea to provide some kind of SDK to users so at least they don't have to go through it themselves. If you need to do it this way, I find the security level appropriate (just-right-kill).
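If it helps, the pseudocode above translates to something like this in Python; treat it as a sketch of the same idea rather than a vetted scheme:

import hashlib
import secrets

def one_time_key(shared_key, salt):
    nonce = secrets.token_hex(16)        # random component instead of a timestamp
    digest = hashlib.sha1((nonce + salt + shared_key).encode()).hexdigest()
    return nonce + "-" + digest

# Sent as the baseauth password, with the username (or the nonce) in the user field:
#   https://username:<one_time_key>@myhost.com/api/call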
Secure secret storage
It depends who you are trying to thwart. If you are preventing people with access to the user's phone from using your REST service in the user's name, then it would be a good idea to find some kind of keyring API on the target OS and have the SDK (or the implementor) store the key there. If that's not possible, you can at least make it a bit harder to get the secret by encrypting it, and storing the encrypted data and the encryption key in separate places.
If you are trying to keep other software vendors from getting your API key to prevent the development of alternate clients, only the encrypt-and-store-separately approach almost works. This is whitebox crypto, and to date, no one has come up with a truly secure solution to problems of this class. The least you can do is still issue a single key for each user so you can ban abused keys.
(*) EDIT: SSL connections should no longer be considered secure without taking additional steps to verify them.
A pure RESTful API should use the underlying protocol standard features:
For HTTP, the RESTful API should comply with existing HTTP standard headers. Adding a new HTTP header violates the REST principles. Do not re-invent the wheel, use all the standard features in HTTP/1.1 standards - including status response codes, headers, and so on. RESTFul web services should leverage and rely upon the HTTP standards.
RESTful services MUST be STATELESS. Any tricks, such as token-based authentication, that attempt to remember the state of previous REST requests on the server violate the REST principles. Again, this is a MUST; that is, if your web server saves any request/response context-related information on the server in an attempt to establish any sort of session, then your web service is NOT stateless. And if it is NOT stateless it is NOT RESTful.
Bottom line: for authentication/authorization purposes you should use the HTTP standard Authorization header. That is, you should add the HTTP authorization/authentication header in each subsequent request that needs to be authenticated. The REST API should follow the HTTP authentication scheme standards. The specifics of how this header should be formatted are defined in the RFC 2616 HTTP 1.1 standard, section 14.8 Authorization, and in RFC 2617, HTTP Authentication: Basic and Digest Access Authentication.
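As a concrete illustration in Python with the requests library (the URL and credentials are made up): the standard Authorization header is attached to every call, and no session is established on the server:

import requests

# requests builds the "Authorization: Basic <base64(user:pass)>" header for us,
# and it is sent with every request; the server keeps no session state.
response = requests.get("https://api.example.com/posts", auth=("alice", "s3cret"))
response.raise_for_status()
print(response.json())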
I have developed a RESTful service for the Cisco Prime Performance Manager application. Search Google for the REST API document that I wrote for that application for more details about RESTful API compliance. In that implementation, I chose to use the HTTP "Basic" authorization scheme; check out version 1.5 or above of that REST API document, and search for authorization in the document.
On the web, a stateful protocol is based on having a temporary token that is exchanged between a browser and a server (via a cookie header or URI rewriting) on every request. That token is usually created on the server end, and it is a piece of opaque data that has a certain time-to-live and has the sole purpose of identifying a specific web user agent. That is, the token is temporary, and becomes STATE that the web server has to maintain on behalf of a client user agent for the duration of the conversation. Therefore, communication using a token in this way is STATEFUL. And if the conversation between client and server is STATEFUL, it is not RESTful.
The username/password (sent in the Authorization header) is usually persisted in the database with the intent of identifying a user. Sometimes the user could mean another application; however, the username/password is NEVER intended to identify a specific web client user agent. The conversation between a web agent and server based on using the username/password in the Authorization header (following HTTP Basic Authorization) is STATELESS, because the web server front end does not create or maintain any STATE information whatsoever on behalf of a specific web client user agent. And based on my understanding of REST, the protocol states clearly that the conversation between clients and server should be STATELESS. Therefore, if we want to have a true RESTful service we should use the username/password (refer to the RFC mentioned in my previous post) in the Authorization header for every single call, NOT a session kind of token (e.g. session tokens created in web servers, OAuth tokens created in authorization servers, and so on).
I understand that several so-called REST providers are using tokens such as OAuth1 or OAuth2 access tokens to be passed as "Authorization: Bearer <token>" in HTTP headers. However, it appears to me that using those tokens for RESTful services would violate the true STATELESS meaning that REST embraces, because those tokens are temporary pieces of data created/maintained on the server side to identify a specific web client user agent for the valid duration of that web client/server conversation. Therefore, any service that is using those OAuth1/2 tokens should not be called REST if we want to stick to the TRUE meaning of a STATELESS protocol.
Rubens