How to authenticate an application, instead of a user? (WCF)

In the context of WCF/Web Services/WS-Trust federated security, what are the generally accepted ways to authenticate an application, rather than a user? From what I gather, it seems like certificate authentication would be the way to go, i.e. generate a certificate specifically for the application. Am I on the right track here? Are there other alternatives to consider?

What you are trying to do is solve the general Digital Rights Management problem, which is an unsolved problem at the moment.
There are a whole host of options for remote attestation that involve trying to hide secrets of some sort (traditional secret keys, or semi-secret behavioural characteristics).
Some simple examples that might deter casual users of your API from working around it:
Include &officialclient=yes in the request
Include &appkey=<some big random key> in the request
Store a secret with the app and use a simple challenge/response: send a random nonce to the app and the app returns HMAC(secret, nonce)
In general, however, the 'defender's advantage' is quite small: however much effort you put into trying to authenticate that the piece of software talking to you is in fact your software, it isn't going to take your attacker/user much more effort to emulate it. (To break the third example I gave, you don't even need to reverse engineer the official client - the user can just hook up the official client to answer the challenges their own client receives.)
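For concreteness, here is a minimal sketch of that third option (challenge/response against an embedded secret) in Node.js/TypeScript; the secret value and function names are purely illustrative, and, as noted, anything shipped with the client can be extracted or proxied:
import { createHmac, randomBytes, timingSafeEqual } from "crypto";

// Illustrative shared secret baked into the official client build.
const APP_SECRET = "replace-with-a-long-random-value";

// Server side: issue a random nonce and later verify the client's answer.
function issueNonce(): string {
  return randomBytes(32).toString("hex");
}

function verifyResponse(nonce: string, clientResponseHex: string): boolean {
  const expected = createHmac("sha256", APP_SECRET).update(nonce).digest();
  const received = Buffer.from(clientResponseHex, "hex");
  return received.length === expected.length && timingSafeEqual(expected, received);
}

// Client side: answer the challenge with HMAC(secret, nonce).
function answerChallenge(nonce: string): string {
  return createHmac("sha256", APP_SECRET).update(nonce).digest("hex");
}

// Example round trip.
const nonce = issueNonce();
console.log(verifyResponse(nonce, answerChallenge(nonce))); // true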
The more robust avenue you can pursue is licensing / legal options. A famous example would be Twitter, who prevent you from knocking up any old client through their API licence terms and conditions - if you created your own (popular) client that pretended to the Twitter API to be the official Twitter client, the assumption is their lawyers would come a-knocking.

If the application is under your control (e.g. your server) then by all means use a certificate.
If this is an application under the user's control (a desktop app), then there is no real way to authenticate the app in a strong way. Even if you use a certificate, a user can extract it and send messages outside the context of that application.
If this is not a critical, high-security system, you could do something good enough, like embedding the certificate inside the application resources. But remember: once the application is physically on the user's machine, every secret inside it can sooner or later be revealed.

Related

What is the correct approach to achieve application authentication?

I am building a multi-tenant application. The tenants of the application are organizations, and each organization can have multiple projects.
For user authentication, I have implemented OAuth-based authentication and role-based authorization.
Example
Let's assume our service serves images to user applications; we'll name it the imagebox server.
Now a user can register with our application. First, he needs to create an organization and then he can create multiple projects in an organization.
In a project, he can create an image with the following properties
Name of the image
Description
Upload an image (it will automatically create a URL)
The customer will create App Keys on the imagebox server. We need only one app key per project, but the customer should be able to create multiple app keys and delete previous ones whenever needed.
Now the customer of our imagebox downloads the SDK from the imagebox server, writes their application on top of the SDK, and passes the APP key.
ImageBoxClient client = ImageBoxClientBuilder.build(APP_KEY);
The client should be able to access images of the project corresponding to the API key.
Questions
Is there any difference between API keys and App Keys?
How to handle authorization - how to restrict the mapping between API keys and projects to N:1, so that a project can have multiple API keys but a key belongs to only one project?
How to achieve authorization in the above case (restrict key to the project), if I intended to use identity management solution like Azure AD?
How to achieve authorization in the above case (restrict key to the project), using API gateway?
Assumption
From your question it's not clear whether the client SDKs are for mobile applications or not.
My answer will be based on the assumption that they are SDKs for mobile applications, but even so, a lot of what I will say is still applicable to SDKs that aren't for mobile applications.
The Difference Between WHO and WHAT is Accessing the API Backend
For user authentication, I have implemented OAuth-based authentication and role-based authorization.
User Authentication will identify WHO is accessing the API backend, but will not identify WHAT is accessing it on its behalf, and you are thinking in using an API key to identify the WHAT part.
So to better understand the differences between the WHO and the WHAT are accessing an API server, let’s use this picture:
The Intended Communication Channel represents the mobile app being used as you expected, by a legit user without any malicious intentions, using a genuine version of the mobile app, and communicating directly with the API backend without being man in the middle attacked.
The actual channel may represent several different scenarios, such as a legit user with malicious intentions using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while man-in-the-middle attacking it to understand how the communication between the mobile app and the API backend is done, in order to be able to automate attacks against your API. Many other scenarios are possible, but we will not enumerate them all here.
So the WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, for example using OpenID Connect or OAuth2 flows. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script, or an attacker manually poking around with the API server, using a tool like Postman?
To your surprise, you may end up discovering that the WHAT is one of your legit users using a repackaged version of the mobile app, or an automated script trying to game and take advantage of the service provided by the mobile application.
API Keys
For a project, there will be a unique app key, using which the SDK authenticates and identifies the project.
So I think you want to say here that you will use an app-key or api-key header to identify WHAT client SDK is accessing the API backend on a per-project basis, and this seems to be reinforced by:
If the API key is the right solution, then how to restrict the mapping between API key and project to N:1? A project can have multiple API keys but a key should belong to only one project.
In my understanding the client SDK will need a specific API key per project, thus if it needs access to more than one project it will need to be released with one API key per project it needs access to.
While this may be useful to filter down and identify access to each project, I want to call your attention to the fact that it cannot be trusted as the only way to restrict access to them, and I hope that by now it's easy to see why, because you have already read about WHO vs WHAT is accessing the API backend. If not, please go back and read about it.
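As a sketch of how the N:1 restriction from the question could be enforced on the backend (as a filter, not as the only line of defense), assuming an Express-style API and a hypothetical store that maps each API key to exactly one project; the header name and route are assumptions:
import express from "express";

// Hypothetical key store: each API key maps to exactly one project,
// while a project may own many keys (the N:1 restriction).
const apiKeyToProject = new Map<string, string>([
  ["key-abc", "project-1"],
  ["key-def", "project-1"],
  ["key-xyz", "project-2"],
]);

const app = express();

// Resolve the app-key header to a single project before any image route runs.
app.use((req, res, next) => {
  const apiKey = req.header("x-app-key"); // header name is an assumption
  const projectId = apiKey ? apiKeyToProject.get(apiKey) : undefined;
  if (!projectId) {
    return res.status(401).json({ error: "unknown or missing app key" });
  }
  res.locals.projectId = projectId; // everything downstream is scoped to this one project
  next();
});

app.get("/v1/images", (_req, res) => {
  // Only images belonging to res.locals.projectId would be returned here.
  res.json({ projectId: res.locals.projectId, images: [] });
});

app.listen(3000);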
Anything that is shipped inside a client SDK must be considered public, and if you have shipped secrets with it, you need to consider them compromised, because an SDK can be reverse engineered by decompiling it, MitM attacks can be performed to understand how it communicates with the backend, and instrumentation frameworks can be used at run-time to hook into the running code, change its behavior, and/or extract data.
Extract an API key with Reverse Engineering
You can learn how to do it by reading my article How to Extract an API key from a Mobile App by Static Binary Analysis, and see how easy it can be done.
Using MobSF to reverse engineer an APK for a mobile app allows us to quickly extract an API key and also gives us a huge amount of information we can use to perform further analysis that may reveal more attack vectors into the mobile app and API server. It is not uncommon to also find secrets for accessing third-party services among this info, or in the decompiled source code that is available to download in smali and java formats.
The above article has a companion repository that shows one of the most effective ways of hiding a secret in a mobile SDK, by using the JNI/NDK:
JNI is the Java Native Interface. It defines a way for the bytecode that Android compiles from managed code (written in the Java or Kotlin programming languages) to interact with native code (written in C/C++). JNI is vendor-neutral, has support for loading code from dynamic shared libraries, and while cumbersome at times is reasonably efficient.
Extract an API key with a MitM Attack
In the article Steal that API key with a Man in the Middle Attack I show an easier way to extract an API key, or any other secret, from a mobile app; performing the MitM attack is made easy by using a tool like mitmproxy:
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
Instrumentation Frameworks
A more advanced technique is to use instrumentation frameworks, like Frida or xPosed to hook into code at run-time, and change its behavior or extract the secrets from it.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
So if you are storing secrets in the keystore of your device, an attacker just needs to poke around the code until he discovers the method that retrieves the secret from the vault, hook into it at run-time, and extract the secret without changing behavior, therefore staying under the radar. The extracted secrets can then be sent to a remote control server that will launch automated attacks against the API, just as if they came from the WHO and the WHAT the API is expecting.
Defending an API Server
Depending on your budget and resources you may employ an array of different approaches and techniques to defend your API server. I will start to enumerate some of the most usual ones, but before I do so I would like to leave this note:
As a best practice, a mobile app or a web app should only communicate with an API server that is under your control, and any access to third-party API services must be done by this same API server you control. This way you limit the attack surface to only one place, where you will employ as many layers of defense as what you are protecting is worth.
You can start with reCaptcha V3, followed by a Web Application Firewall (WAF) and finally, if you can afford it, a User Behavior Analytics (UBA) solution.
Google reCAPTCHA V3:
reCAPTCHA is a free service that protects your website from spam and abuse. reCAPTCHA uses an advanced risk analysis engine and adaptive challenges to keep automated software from engaging in abusive activities on your site. It does this while letting your valid users pass through with ease.
...helps you detect abusive traffic on your website without any user friction. It returns a score based on the interactions with your website and provides you more flexibility to take appropriate actions.
WAF - Web Application Firewall:
A web application firewall (or WAF) filters, monitors, and blocks HTTP traffic to and from a web application. A WAF is differentiated from a regular firewall in that a WAF is able to filter the content of specific web applications while regular firewalls serve as a safety gate between servers. By inspecting HTTP traffic, it can prevent attacks stemming from web application security flaws, such as SQL injection, cross-site scripting (XSS), file inclusion, and security misconfigurations.
UBA - User Behavior Analytics:
User behavior analytics (UBA) as defined by Gartner is a cybersecurity process about detection of insider threats, targeted attacks, and financial fraud. UBA solutions look at patterns of human behavior, and then apply algorithms and statistical analysis to detect meaningful anomalies from those patterns—anomalies that indicate potential threats. Instead of tracking devices or security events, UBA tracks a system's users. Big data platforms like Apache Hadoop are increasing UBA functionality by allowing them to analyze petabytes worth of data to detect insider threats and advanced persistent threats.
All these solutions work on a negative identification model; in other words, they try their best to differentiate the bad from the good by identifying what is bad, not what is good, and thus they are prone to false positives, despite the advanced technology used by some of them, like machine learning and artificial intelligence.
So you may find yourself, more often than not, having to relax how you block access to the API server in order not to affect the good users. This also means that these solutions require constant monitoring to validate that the false positives are not blocking your legit users and that, at the same time, they are properly keeping the unauthorized ones at bay.
Regarding APIs serving mobile apps, a positive identification model can be used instead, by employing a Mobile App Attestation solution that allows the API server to verify with a very high degree of confidence that the requests are coming from WHO and WHAT is expected.
Conclusion
My recommendation is to use defense in depth, where you use as many techniques as you can afford, in accordance with the value of what you are trying to protect and the legal requirements for that type of data, like the GDPR regulations in Europe. The end goal is to increase the expertise and time necessary to overcome all defenses, to the point that the attacker may give up and move on to easier targets.
So defending an API server is a daunting task and a never-ending battle.

How to secure API Key with Nuxt and verify

I am using Nuxt (with SSR/ PWA/ Vuejs/ Node.js/ Vuex/ Firestore) and would like to have a general idea or have an example for the following:
How can I secure an API key, for example to call the MailChimp API?
I am not familiar with how a hacker would see this if a poor solution is implemented. How can I verify it is not accessible to them?
I have found a number of "solutions" that recommend using environment variables, but for every solution someone indicates it won't be secure. See:
https://github.com/nuxt-community/dotenv-module/issues/7
https://github.com/nuxt/nuxt.js/issues/2033
Perhaps server middleware is the answer? https://blog.lichter.io/posts/sending-emails-through-nuxtjs and https://www.youtube.com/watch?v=j-3RwvWZoaU (#11:30). I just need to add an email to a MailChimp account once it's entered; that seems like a lot of overhead.
Also, I see I already store my Firestore API key as an environment variable. Is this secure? When I open Chrome dev tools -> Sources -> page -> app.js, I can see the API key right there (only tested in dev mode)!
You could use either a server middleware or https://github.com/nuxt-community/separate-env-module
Middleware itself won't work because it can be executed on the client too, and code that is used in middleware will be available on the client.
For #2, you can check whether it's included in the client JS sources. There are many other ways for a hacker to get at anything (e.g. XSS), but those are general concerns and not much related to your code.
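A rough sketch of the serverMiddleware approach for the MailChimp case, assuming Nuxt 2; the file path, route, payload and environment variable names are illustrative, and the key only ever exists on the Node server:
// nuxt.config.js (excerpt): register a server-only handler.
//   serverMiddleware: [{ path: "/api/subscribe", handler: "~/server-middleware/subscribe" }]

// server-middleware/subscribe.ts -- runs on the server, never shipped to the browser.
import type { IncomingMessage, ServerResponse } from "http";

export default async function subscribe(req: IncomingMessage, res: ServerResponse) {
  // Read the JSON body posted by the client-side form.
  let body = "";
  for await (const chunk of req) body += chunk;
  const { email } = JSON.parse(body || "{}");

  // The secret stays in a server-side environment variable.
  // The MailChimp endpoint/payload below is illustrative, not copied from their docs;
  // global fetch assumes a recent Node version (otherwise use axios or node-fetch).
  const upstream = await fetch(
    `https://usX.api.mailchimp.com/3.0/lists/${process.env.MAILCHIMP_LIST_ID}/members`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.MAILCHIMP_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ email_address: email, status: "subscribed" }),
    }
  );

  res.statusCode = upstream.ok ? 200 : 502;
  res.end(JSON.stringify({ ok: upstream.ok }));
}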
How can I secure an API key, for example to call the MailChimp API?
The cruel truth here is NO... On the client side you cannot secure any kind of secret, at least in a web app.
Just so you have an idea of the techniques that can be used to protect an API and how they can be bypassed, you can read this series of articles. While it is in the context of an API serving a mobile app, the majority of it also applies to an API serving a web app. You will learn how API keys, OAuth tokens, HMAC and certificate pinning can be used and bypassed.
Access to third-party services must always be done in the back-end, never on the client side. With this approach you only have one place to protect, and it is under your control.
For example, in your case of accessing the Mailchimp API... If your back-end is the one in charge of doing it on behalf of your web app, then you can put security measures in place to detect and mitigate abuse of Mailchimp by your web app, like a User Behaviour Analytics (UBA) solution. Leaving access to the Mailchimp API to the web app means that you only know that someone is abusing it when Mailchimp alerts you or you see it in their dashboards.
I am not familiar with how a hacker would see this if a poor solution is implemented. How can I verify it is not accessible to them?
As you may already know, F12 to access the developer tools is one of the ways.
Another way is to use the OWASP security tool Zed Attack Proxy (ZAP), and, using their words:
The OWASP Zed Attack Proxy (ZAP) is one of the world’s most popular free security tools and is actively maintained by hundreds of international volunteers*. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your applications. Its also a great tool for experienced pentesters to use for manual security testing.
Storing secrets in the front-end is a big no no in terms of security.
If your website is pre-rendered (SSG, i.e. a static website) and is hosted on Netlify, it sounds like a perfect job for Netlify Functions (server-side logic) and environment variables.
You can find some documentation here: Netlify functions.
Netlify functions are powered by AWS Lambda.
You would typically create a functions folder in your project directory and write your functions there. Functions are built after each deploy, but you can test your functions locally with Netlify Dev.
Here is an example of a function using the Mailchimp service with injected secrets:
https://github.com/tobilg/netlify-functions-landingpage/blob/169de175d04b165b5d4801b09cb250cd9a740da5/src/lambda/signup.js
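A stripped-down sketch of what such a Netlify function can look like (not the linked example verbatim); the secret is configured as an environment variable in the Netlify UI and only exists inside the function, and the variable names are assumptions:
// functions/signup.ts -- deployed by Netlify, executed server-side on AWS Lambda.
import type { Handler } from "@netlify/functions";

export const handler: Handler = async (event) => {
  const { email } = JSON.parse(event.body || "{}");

  // The API key never reaches the browser; it is read from the function's environment.
  const apiKey = process.env.MAILCHIMP_API_KEY;
  if (!apiKey || !email) {
    return { statusCode: 400, body: "missing email or server configuration" };
  }

  // ...call the third-party API here, much like the serverMiddleware sketch earlier in this thread...

  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};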
I think privateRuntimeConfig, by which secrets are only available on the server side, is another workable solution here, if you're in a situation where you only need to access an API during server-side rendering.
https://nuxtjs.org/tutorials/moving-from-nuxtjs-dotenv-to-runtime-config/#misconceptions:~:text=privateRuntimeConfig%20should%20hold%20all%20env%20variables%20that%20are%20private%20and%20that%20should%20not%20be%20exposed%20on%20the%20frontend.%20This%20could%20include%20a%20reference%20to%20your%20API%20secret%20tokens%20for%20example.
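A small sketch of that approach, assuming Nuxt 2.13+; the variable and endpoint names are placeholders:
// nuxt.config.js (excerpt)
export default {
  // Only available on the server, e.g. inside asyncData/nuxtServerInit during SSR.
  privateRuntimeConfig: {
    apiSecret: process.env.API_SECRET,
  },
  // Safe to expose; also bundled into the client.
  publicRuntimeConfig: {
    apiBase: process.env.API_BASE || "https://api.example.com",
  },
};

// In a page component, context.$config.apiSecret is defined during SSR only,
// so the secret token never appears in the client bundle or in dev tools.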

Where to store client secret for mobile app? [duplicate]

When using the OAuth protocol, you need a secret string obtained from the service you want to delegate to. If you are doing this in a web app, you can simply store the secret in your data base or on the file system, but what is the best way to handle it in a mobile app (or a desktop app for that matter)?
Storing the string in the app is obviously not good, as someone could easily find it and abuse it.
Another approach would be to store it on your server, and have the app fetch it on every run, never storing it on the phone. This is almost as bad, because you have to include the URL in the app.
The only workable solution I can come up with is to first obtain the Access Token as normal (preferably using a web view inside the app), and then route all further communication through our server, which would append the secret to the request data and communicate with the provider. Then again, I'm a security noob, so I'd really like to hear some knowledgeable peoples' opinions on this. It doesn't seem to me that most apps are going to these lengths to guarantee security (for example, Facebook Connect seems to assume that you put the secret into a string right in your app).
Another thing: I don't believe the secret is involved in initially requesting the Access Token, so that could be done without involving our own server. Am I correct?
Yes, this is an issue with the OAuth design that we are facing ourselves. We opted to proxy all calls through our own server. OAuth wasn't entirely fleshed out with respect to desktop apps. There is no perfect solution to the issue that I've found without changing OAuth.
If you think about it and ask the question why we have secrets, it is mostly for provisioning and disabling apps. If our secret is compromised, then the provider can only really revoke the entire app. Since we have to embed our secret in the desktop app, we are sorta screwed.
The solution is to have a different secret for each desktop app. OAuth doesn't make this concept easy. One way is to have the user go and create a secret on their own and enter the key themselves into your desktop app (some Facebook apps did something similar for a long time, having the user go and create a Facebook app to set up their custom quizzes and crap). It's not a great experience for the user.
I'm working on a proposal for a delegation system for OAuth. The concept is that using our own secret key we get from our provider, we could issue our own delegated secret to each of our own desktop clients (one for each desktop app, basically) and then during the auth process we send that key over to the top-level provider, which calls back to us and re-validates with us. That way we can revoke the secrets we issue to each desktop client. (This borrows a lot of how it works from SSL.) This entire system would be perfect for value-add web services as well that pass on calls to a third-party web service.
The process could also be done without delegation verification callbacks if the top level provider provides an API to generate and revoke new delegated secrets. Facebook is doing something similar by allowing facebook apps to allow users to create sub-apps.
There are some talks about the issue online:
http://blog.atebits.com/2009/02/fixing-oauth/
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/629b03475a3d78a1/de1071bf4b820c14#de1071bf4b820c14
Twitter and Yammer's solution is an authentication PIN solution:
https://dev.twitter.com/oauth/pin-based
https://www.yammer.com/api_oauth_security_addendum.html
With OAUth 2.0, you can store the secret on the server. Use the server to acquire an access token that you then move to the app and you can make calls from the app to the resource directly.
With OAuth 1.0 (Twitter), the secret is required to make API calls. Proxying calls through the server is the only way to ensure the secret is not compromised.
Both require some mechanism that your server component knows it is your client calling it. This tends to be done on installation and using a platform specific mechanism to get an app id of some kind in the call to your server.
(I am the editor of the OAuth 2.0 spec)
One solution could be to hard code the OAuth secret into the code, but not as a plain string. Obfuscate it in some way - split it into segments, shift characters by an offset, rotate it - do any or all of these things. A cracker can analyse your byte code and find strings, but the obfuscation code might be hard to figure out.
It's not a foolproof solution, but a cheap one.
Depending on the value of the exploit, some genius crackers can go to greater lengths to find your secret code. You need to weigh the factors - cost of previously mentioned server side solution, incentive for crackers to spend more efforts on finding your secret code, and the complexity of the obfuscation you can implement.
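A toy sketch of that kind of obfuscation (TypeScript for brevity; the same idea applies to mobile byte code). It only raises the cost of a casual string dump; it does not make the secret safe:
// The secret never appears as one contiguous string literal in the binary.
// Here it is split into segments and every character is shifted by a fixed offset.
const SEGMENTS = ["Xibu", "b!tfdsfu"]; // "What" and "a secret", each character shifted by +1

function deobfuscate(segments: string[], offset: number): string {
  return segments
    .map((s) => [...s].map((c) => String.fromCharCode(c.charCodeAt(0) - offset)).join(""))
    .join(" ");
}

console.log(deobfuscate(SEGMENTS, 1)); // "What a secret"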
Do not store the secret inside the application.
You need to have a server that can be accessed by the application over https (obviously) and you store the secret on it.
When someone wants to log in via your mobile/desktop application, your application will simply forward the request to the server, which will then append the secret and send it to the service provider. Your server can then tell your application whether it was successful or not.
Then if you need to get any sensitive information from the service (Facebook, Google, Twitter, etc.), the application asks your server, and your server will give it to the application only if it is correctly connected.
There is not really any option except storing it on a server. Nothing on the client side is secure.
Note
That said, this will only protect you against a malicious client, but not the client against a malicious you, and not the client against other malicious clients (phishing)...
OAuth is a much better protocol in browser than on desktop/mobile.
There is a new extension to the Authorization Code Grant Type called Proof Key for Code Exchange (PKCE). With it, you don't need a client secret.
PKCE (RFC 7636) is a technique to secure public clients that don't use a client secret.
It is primarily used by native and mobile apps, but the technique can be applied to any public client as well. It requires additional support by the authorization server, so it is only supported on certain providers.
from https://oauth.net/2/pkce/
For more information, you can read the full RFC 7636 or this short introduction.
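A minimal sketch of the client-side PKCE pieces (Node.js/TypeScript, S256 method); the authorization server URLs and client id are placeholders:
import { createHash, randomBytes } from "crypto";

// Base64url encoding without padding, as required by RFC 7636.
const base64url = (buf: Buffer) =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// 1. The app creates a high-entropy code_verifier and keeps it locally.
const codeVerifier = base64url(randomBytes(32));

// 2. It derives the code_challenge and sends it with the authorization request.
const codeChallenge = base64url(createHash("sha256").update(codeVerifier).digest());

const authorizeUrl =
  "https://auth.example.com/authorize" +
  "?response_type=code&client_id=my-public-client" +
  `&redirect_uri=${encodeURIComponent("myapp://callback")}` +
  `&code_challenge=${codeChallenge}&code_challenge_method=S256`;

// 3. When exchanging the returned code for tokens, the app sends the plain
//    code_verifier instead of a client secret; the server re-derives the
//    challenge from it and compares before issuing the access token.
console.log(authorizeUrl);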
Here's something to think about. Google offers two methods of OAuth... for web apps, where you register the domain and generate a unique key, and for installed apps where you use the key "anonymous".
Maybe I glossed over something in the reading, but it seems that sharing your webapp's unique key with an installed app is probably more secure than using "anonymous" in the official installed apps method.
With OAuth 2.0 you can simply use the client-side flow to obtain an access token and then use this access token to authenticate all further requests. Then you don't need a secret at all.
A nice description of how to implement this can be found here: https://aaronparecki.com/articles/2012/07/29/1/oauth2-simplified#mobile-apps
I don't have a ton of experience with OAuth - but doesn't every request require not only the user's access token, but an application consumer key and secret as well? So, even if somebody steals a mobile device and tries to pull data off of it, they would need an application key and secret as well to be able to actually do anything.
I always thought the intention behind OAuth was so that every Tom, Dick, and Harry that had a mashup didn't have to store your Twitter credentials in the clear. I think it solves that problem pretty well despite its limitations. Also, it wasn't really designed with the iPhone in mind.
I agree with Felixyz. OAuth whilst better than Basic Auth, still has a long way to go to be a good solution for mobile apps. I've been playing with using OAuth to authenticate a mobile phone app to a Google App Engine app. The fact that you can't reliably manage the consumer secret on the mobile device means that the default is to use the 'anonymous' access.
The Google App Engine OAuth implementation's browser authorization step takes you to a page containing text like:
"The site <some-site> is requesting access to your Google Account for the product(s) listed below"
YourApp(yourapp.appspot.com) - not affiliated with Google
etc
It takes <some-site> from the domain/host name used in the callback URL that you supply, which can be anything on Android if you use a custom scheme to intercept the callback.
So if you use 'anonymous' access or your consumer secret is compromised, then anyone could write a consumer that fools the user into giving access to your gae app.
The Google OAuth authorization page also does contain lots of warnings which have 3 levels of severity depending on whether you're using 'anonymous', consumer secret, or public keys.
Pretty scary stuff for the average user who isn't technically savvy. I don't expect to have a high signup completion percentage with that kind of stuff in the way.
This blog post clarifies how consumer secrets don't really work with installed apps.
http://hueniverse.com/2009/02/should-twitter-discontinue-their-basic-auth-api/
Here I have answered with a secure way of storing your OAuth information in a mobile application:
https://stackoverflow.com/a/17359809/998483
https://sites.google.com/site/greateindiaclub/mobil-apps/ios/securelystoringoauthkeysiniosapplication
Facebook doesn't implement OAuth strictly speaking (yet), but they have implemented a way for you not to embed your secret in your iPhone app: https://web.archive.org/web/20091223092924/http://wiki.developers.facebook.com/index.php/Session_Proxy
As for OAuth, yeah, the more I think about it, we are a bit stuffed. Maybe this will fix it.
None of these solutions prevent a determined hacker from sniffing packets sent from their mobile device (or emulator) to view the client secret in the HTTP headers.
One solution could be to have a dynamic secret which is made up of a timestamp encrypted with a private two-way encryption key & algorithm. The service then decrypts the secret and determines whether the timestamp is within +/- 5 minutes.
In this way, even if the secret is compromised, the hacker will only be able to use it for a maximum of 5 minutes.
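A rough sketch of that idea in Node.js/TypeScript (AES-GCM standing in for the unspecified two-way algorithm). Note that the key still has to ship with the client, so it inherits all the extraction problems discussed above; it only bounds the replay window:
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

const KEY = randomBytes(32); // in practice: the same two-way key baked into client and server
const WINDOW_MS = 5 * 60 * 1000;

// Client: the "dynamic secret" is the current timestamp, encrypted.
function makeDynamicSecret(): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(String(Date.now()), "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), ciphertext].map((b) => b.toString("hex")).join(".");
}

// Server: decrypt and accept the request only if the timestamp is within +/- 5 minutes.
function verifyDynamicSecret(token: string): boolean {
  try {
    const [iv, tag, ciphertext] = token.split(".").map((h) => Buffer.from(h, "hex"));
    const decipher = createDecipheriv("aes-256-gcm", KEY, iv);
    decipher.setAuthTag(tag);
    const ts = Number(Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8"));
    return Math.abs(Date.now() - ts) <= WINDOW_MS;
  } catch {
    return false;
  }
}

console.log(verifyDynamicSecret(makeDynamicSecret())); // true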
I'm also trying to come up with a solution for mobile OAuth authentication, and storing secrets within the application bundle in general.
And a crazy idea just hit me: The simplest idea is to store the secret inside the binary, but obfuscated somehow, or, in other words, you store an encrypted secret. So, that means you've got to store a key to decrypt your secret, which seems to have taken us full circle. However, why not just use a key which is already in the OS, i.e. it's defined by the OS not by your application.
So, to clarify my idea is that you pick a string defined by the OS, it doesn't matter which one. Then encrypt your secret using this string as the key, and store that in your app. Then during runtime, decrypt the variable using the key, which is just an OS constant. Any hacker peeking into your binary will see an encrypted string, but no key.
Will that work?
As others have mentioned, there should be no real issue with storing the secret locally on the device.
On top of that, you can always rely on the UNIX-based security model of Android: only your application can access what you write to the file system. Just write the info to your app's default SharedPreferences object.
In order to obtain the secret, one would have to obtain root access to the Android phone.

API Security/Authorization

I am in need of advice on how best to tackle the following scenario and best practices to implement it.
Our company wants to overhaul its old IT systems and create new website app(s) and possibly mobile apps down the line for its employees and contractors to interact with.
So I was thinking about creating an API that both the website apps and mobile apps could be created from...
https://api.company.com/v1
The advice I need is in relation to security/authorization of the API. My thoughts at present on how to implement this would be that the employees/contractors would interact with the API through the company's website app(s)/mobile apps, which would then communicate with the API and set the appropriate access permissions.
WebsiteApp.company.com ->>> api.company.com/v1
mobileapp ->>> api.company.com/v1
My first thought was just setting up a username and password for each user on the API side and letting both the website apps and mobile apps use these. The problem, however, is that the contractors and possibly some employees cannot be fully trusted and could pass on usernames and passwords to third parties without the company's permission. So my question is really what other security/authorization/authentication strategies I should be looking at to overcome this situation. In a perfect world each user would have to authorize each device/mobile app/website app he/she wants to access the API from...
Is OAuth 2.0 capable of this? I'm not sure if it's capable of the specific user/device/website scenario though.
Technologies I am thinking of using are:
API
Node.js with (Express.js? or Restify?), MongoDB
Consumer Apps
Website Apps (Angular Js, Backbone etc..)
Mobile Apps (PhoneGap, Jquery Mobile etc..)
Many Thanks
Jonathan
It seems that your main concern is that you can't trust the people you are giving access to, and if this is the case, you probably shouldn't be trying to give them access in the first place. If these apps are to be used for any confidential information or intellectual property that you are worried about someone else seeing if the contractor/employee gives away their password, then you have to consider the contractor/employee just taking the information and giving it away.
In this situation your username/password should suffice for authentication, however you should also consider very tight permissions on who can access what. If you are worried about information getting out, everything should be shown on a need-to-know basis. If a contractor doesn't need a specific piece of information, make sure it isn't provided to his account.
You could also consider tracking the locations (IPs) that an account is being accessed from. Perhaps when an account is accessed from a new location, have the employee/contractor complete some task to validate the account, which could be anything from entering a validation code (similar to two-factor authentication) to calling a help-line and having the location authorized.
This might be a bit late, but I am going through the same process (What is the correct flow when using OAuth with the Resource Owner Password Credentials Grant).
We have not figured out the core implementation, but what you want to do sounds similar to what we are trying to do for our service.
From my understanding it depends on the apps, whether they are trusted or not, and what you plan to do with your API moving forwards. If the apps are trusted you could potentially use HTTP Basic over SSL, which is a viable solution.
For us, we are going to create a suite of trusted official apps (web, mobile etc.) over the API and then open it up, so we are deciding to go through the pain of OAuth2, where for our apps we will use the Resource Owner Password Credentials grant type, in which you swap a user's username and password for a token which is what the client will use to interact with your API, with the trust implicitly defined.
When we open up the API for third-party consumption this model won't work, and we will go through the processes that all the major sites do and get explicit permission from the user on what the third-party apps can do with their data.
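For reference, the Resource Owner Password Credentials exchange boils down to a single token request like the sketch below (TypeScript; the token endpoint and client id are placeholders). The trusted first-party app sends the user's credentials once and keeps only the returned token:
// Exchange the user's credentials for an access token (OAuth 2.0 "password" grant).
async function loginWithPassword(username: string, password: string): Promise<string> {
  const response = await fetch("https://auth.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "password",
      username,
      password,
      client_id: "official-first-party-app",
      scope: "api",
    }),
  });
  if (!response.ok) throw new Error(`token request failed: ${response.status}`);
  const { access_token } = await response.json();
  // From here on, every API call sends "Authorization: Bearer <access_token>";
  // the password itself is never stored by the client.
  return access_token;
}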

How to Protect a private REST API

I'm currently thinking about how I could protect my REST API, which is used only by my mobile application, from being used by other applications.
Could an API key be a good solution, since only I know the secret API key?
Is there a better solution?
Leon, you keep mentioning "someone else using my API with another application". So, you want to tie your API to be used only by one application? So, you don't want to give access rights to a user, you want to give them instead to an instance of your application running on the user's mobile device.
In essence: You don't trust the user!
Well, in that case you need to make sure your application is closed source, need to code your credentials into your application in such a way that nobody can retrieve them or store the credentials for it in a specially encrypted manner on the device, the decryption key for it being readable only by your application. In a way, you need to implement a form of DRM to prevent people from doing stuff with data on their mobile device. And you need to hope that nobody can reverse engineer it.
If your app becomes popular / interesting enough, count on the fact that people who are very, very good at this sort of thing will look at your application and will break your encryption before you know it. Maybe, if you put the same amount of effort into it as Skype has, maybe then you can ward them off for a while.
But ask yourself: Why bother? Why don't I trust my users? Is it really worth it to jump through hoops like this to prevent some other application from using my API?
Just lead your user through a registration process in which each app instance gets a unique key from the server (or a unique HTTP auth password) and stores that somewhere on the user's mobile device. Then, to access the interesting features in the API, require the presence of this key/password. But don't go to extreme lengths to obfuscate or encrypt the key when you store it locally; it's not worth it. If you ever detect misuse later, you can always revoke the access rights for a particular account on the server anyway.
Use HTTP Authentication. REST is all about using the facilities available in HTTP, so the native HTTP auth should be used. With basic authentication you’ll have to use HTTPS though. If you cannot do that use HTTP digest auth or NTLM.
All of them have different strengths and weaknesses, and not every one of them might be supported by your HTTP server and client library.
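For completeness, a small sketch of what HTTP Basic authentication looks like from a TypeScript client over HTTPS; the endpoint and credentials are placeholders:
// Basic auth is just base64("user:password") in the Authorization header,
// which is why it should only ever travel over HTTPS.
async function getProtectedResource(user: string, password: string) {
  const credentials = Buffer.from(`${user}:${password}`).toString("base64");
  const response = await fetch("https://api.example.com/v1/items", {
    headers: { Authorization: `Basic ${credentials}` },
  });
  if (response.status === 401) throw new Error("invalid credentials");
  return response.json();
}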