I am using Nuxt (with SSR / PWA / Vue.js / Node.js / Vuex / Firestore) and would like a general idea of, or an example for, the following:
How can I secure an API key, for example to call the MailChimp API?
I am not familiar with how a hacker would see this if a poor solution is implemented. How can I verify it is not accessible to them?
I have found a number of "solutions" that recommend using environment variables, but for every solution someone indicates it won't be secure. See:
https://github.com/nuxt-community/dotenv-module/issues/7
https://github.com/nuxt/nuxt.js/issues/2033
Perhaps server middleware is the answer? https://blog.lichter.io/posts/sending-emails-through-nuxtjs and https://www.youtube.com/watch?v=j-3RwvWZoaU (#11:30). I just need to add an email address to a MailChimp account once it is entered, which seems like a lot of overhead.
Also, I see I already store my Firestore API key as an environment variable. Is this secure? When I open Chrome dev tools -> Sources -> Page -> app.js, I can see the API key right there (only tested in dev mode)!
You could use either a server middleware or https://github.com/nuxt-community/separate-env-module
Middleware itself won't work, because it can be executed on the client too, and code that is used in middleware will be available on the client.
For #2, you can check whether it's included in the client JS sources. There are many other ways for a hacker to get at things, e.g. XSS, but those are general concerns and not specific to your code.
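If you go the server middleware route, a minimal sketch could look like the following. This is an assumption-laden example, not the linked blog post's exact code: the env var names MAILCHIMP_API_KEY and MAILCHIMP_LIST_ID are made up, and it targets Mailchimp's v3 "add list member" endpoint. The key only ever lives in the Node process, so it never appears in the client bundle.

```js
// api/subscribe.js - Nuxt server middleware (server-only code).
// Register it in nuxt.config.js:
//   serverMiddleware: [{ path: '/api/subscribe', handler: '~/api/subscribe.js' }]
const https = require('https')

module.exports = function (req, res) {
  if (req.method !== 'POST') {
    res.statusCode = 405
    return res.end()
  }
  let body = ''
  req.on('data', chunk => { body += chunk })
  req.on('end', () => {
    const { email } = JSON.parse(body || '{}')
    const apiKey = process.env.MAILCHIMP_API_KEY // assumed env var name
    const dc = apiKey.split('-')[1] // Mailchimp keys end in a datacenter suffix, e.g. "-us1"
    const payload = JSON.stringify({ email_address: email, status: 'subscribed' })
    const mcReq = https.request({
      hostname: dc + '.api.mailchimp.com',
      path: '/3.0/lists/' + process.env.MAILCHIMP_LIST_ID + '/members',
      method: 'POST',
      auth: 'anystring:' + apiKey, // Mailchimp v3 uses HTTP basic auth; the username is arbitrary
      headers: { 'Content-Type': 'application/json' }
    }, mcRes => {
      res.statusCode = mcRes.statusCode
      mcRes.pipe(res) // relay Mailchimp's response to the browser
    })
    mcReq.end(payload)
  })
}
```

The client then simply POSTs { email } to /api/subscribe and never sees the key.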
How can I secure an API key, for example to call the MailChimp API?
The cruel truth here is NO... On the client side you cannot secure any kind of secret, at least not in a web app.
Just to give you an idea of the techniques that can be used to protect an API and how they can be bypassed, you can read this series of articles. While it is written in the context of an API serving a mobile app, the majority of it also applies to an API serving a web app. You will learn how API keys, OAuth tokens, HMAC and certificate pinning can be used and bypassed.
Access to third-party services must always be done in the back-end, never on the client side. With this approach you only have one place to protect, and it is under your control.
For example, in your case of accessing the Mailchimp API: if your back-end is the one in charge of doing it on behalf of your web app, then you can put security measures in place to detect and mitigate abuse of Mailchimp by your web app, like a User Behaviour Analytics (UBA) solution. Leaving access to the Mailchimp API in the web app means you only know that someone is abusing it when Mailchimp alerts you or you see it in their dashboards.
I am not familiar with how a hacker would see this if a poor solution is implemented. How can I verify it is not accessible to them?
As you may already know, F12 to access the developer tools is one of the ways.
Another way is to use the OWASP security tool Zed Attack Proxy (ZAP), and using their words:
The OWASP Zed Attack Proxy (ZAP) is one of the world's most popular free security tools and is actively maintained by hundreds of international volunteers. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your applications. It's also a great tool for experienced pentesters to use for manual security testing.
Storing secrets in the front-end is a big no no in terms of security.
If your website is statically generated (SSG, aka a static website) and is hosted on Netlify, it sounds like a perfect job for Netlify Functions (server-side logic) and environment variables.
You can find the documentation here: Netlify Functions.
Netlify Functions are powered by AWS Lambda.
You would typically create a functions folder in your project directory and write your functions there. Functions are built on each deploy, but you can test them locally with Netlify Dev.
Here is an example of a function using the Mailchimp service with injected secrets:
https://github.com/tobilg/netlify-functions-landingpage/blob/169de175d04b165b5d4801b09cb250cd9a740da5/src/lambda/signup.js
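For comparison, the skeleton of such a Netlify Function could look like this; the file name and the MAILCHIMP_API_KEY variable are hypothetical, and the Mailchimp call itself is elided since the linked signup.js shows it in full:

```js
// functions/signup.js - a Netlify Function (runs on AWS Lambda).
// The secret is set as an environment variable in the Netlify UI,
// so it exists only at function runtime, never in the client bundle.
exports.handler = async function (event) {
  if (event.httpMethod !== 'POST') {
    return { statusCode: 405, body: 'Method Not Allowed' }
  }
  const { email } = JSON.parse(event.body || '{}')
  const apiKey = process.env.MAILCHIMP_API_KEY // assumed variable name
  // ...call Mailchimp with apiKey here, as the linked signup.js does...
  return { statusCode: 200, body: JSON.stringify({ subscribed: email }) }
}
```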
I think privateRuntimeConfig, by which secrets are only available on the server side, is another workable solution here, if you're in a situation where you only need to access an API during server-side rendering.
https://nuxtjs.org/tutorials/moving-from-nuxtjs-dotenv-to-runtime-config/#misconceptions:~:text=privateRuntimeConfig%20should%20hold%20all%20env%20variables%20that%20are%20private%20and%20that%20should%20not%20be%20exposed%20on%20the%20frontend.%20This%20could%20include%20a%20reference%20to%20your%20API%20secret%20tokens%20for%20example.
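A minimal sketch of that split, assuming a made-up mailchimpApiKey name; publicRuntimeConfig ends up in the client bundle while privateRuntimeConfig is only exposed to the server:

```js
// nuxt.config.js (Nuxt 2.13+)
export default {
  publicRuntimeConfig: {
    // safe to expose: shipped to the browser
    apiBase: process.env.API_BASE || 'https://api.example.com'
  },
  privateRuntimeConfig: {
    // server-only: readable via $config during SSR, never sent to the client
    mailchimpApiKey: process.env.MAILCHIMP_API_KEY
  }
}
```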
When using the OAuth protocol, you need a secret string obtained from the service you want to delegate to. If you are doing this in a web app, you can simply store the secret in your database or on the file system, but what is the best way to handle it in a mobile app (or a desktop app, for that matter)?
Storing the string in the app is obviously not good, as someone could easily find it and abuse it.
Another approach would be to store it on your server, and have the app fetch it on every run, never storing it on the phone. This is almost as bad, because you have to include the URL in the app.
The only workable solution I can come up with is to first obtain the Access Token as normal (preferably using a web view inside the app), and then route all further communication through our server, which would append the secret to the request data and communicate with the provider. Then again, I'm a security noob, so I'd really like to hear some knowledgeable people's opinions on this. It doesn't seem to me that most apps are going to these lengths to guarantee security (for example, Facebook Connect seems to assume that you put the secret into a string right in your app).
Another thing: I don't believe the secret is involved in initially requesting the Access Token, so that could be done without involving our own server. Am I correct?
Yes, this is an issue with the OAuth design that we are facing ourselves. We opted to proxy all calls through our own server. OAuth wasn't entirely fleshed out with respect to desktop apps. There is no perfect solution to the issue that I've found without changing OAuth.
If you think about it and ask why we have secrets, it is mostly for provisioning and disabling apps. If our secret is compromised, then the provider can only really revoke the entire app. Since we have to embed our secret in the desktop app, we are sorta screwed.
The solution is to have a different secret for each desktop app. OAuth doesn't make this concept easy. One way is to have the user go and create a secret on their own and enter the key into your desktop app themselves (some Facebook apps did something similar for a long time, having the user go and create Facebook API keys to set up their custom quizzes and crap). It's not a great experience for the user.
I'm working on a proposal for a delegation system for OAuth. The concept is that, using the secret key we get from our provider, we could issue our own delegated secret to each of our desktop clients (one per desktop app, basically), and then during the auth process we send that key over to the top-level provider, which calls back to us and re-validates it with us. That way we can revoke the secrets we issue to each desktop client. (This borrows a lot from how SSL works.) This entire system would also be perfect for value-add web services that pass on calls to a third-party web service.
The process could also be done without delegation verification callbacks if the top-level provider offers an API to generate and revoke new delegated secrets. Facebook is doing something similar by allowing Facebook apps to let users create sub-apps.
There are some talks about the issue online:
http://blog.atebits.com/2009/02/fixing-oauth/
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/629b03475a3d78a1/de1071bf4b820c14#de1071bf4b820c14
Twitter and Yammer's solution is an authentication PIN solution:
https://dev.twitter.com/oauth/pin-based
https://www.yammer.com/api_oauth_security_addendum.html
With OAuth 2.0, you can store the secret on the server. Use the server to acquire an access token that you then move to the app, and you can make calls from the app to the resource directly.
With OAuth 1.0 (Twitter), the secret is required to make API calls. Proxying calls through the server is the only way to ensure the secret is not compromised.
Both require some mechanism by which your server component knows it is your client calling it. This tends to be done at installation time, using a platform-specific mechanism to get an app id of some kind into the call to your server.
(I am the editor of the OAuth 2.0 spec)
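To make the OAuth 2.0 pattern above concrete, here is a hedged sketch of the server-side half: the provider's token URL, the redirect URI and the env var names are placeholders, but the form parameters are the standard authorization-code grant. The client_secret is only ever used here, on your server (Node 18+ for the global fetch):

```js
// Server-side: exchange the authorization code for an access token,
// keeping client_secret out of the app entirely.
async function exchangeCodeForToken (code) {
  const res = await fetch('https://provider.example.com/token', { // placeholder URL
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code,
      client_id: process.env.CLIENT_ID,         // assumed env var names
      client_secret: process.env.CLIENT_SECRET, // never leaves the server
      redirect_uri: 'https://yourserver.example.com/callback'
    })
  })
  const { access_token } = await res.json()
  return access_token // hand this to the app; it can call the resource directly
}
```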
One solution could be to hard code the OAuth secret into the code, but not as a plain string. Obfuscate it in some way - split it into segments, shift characters by an offset, rotate it - do any or all of these things. A cracker can analyse your byte code and find strings, but the obfuscation code might be hard to figure out.
It's not a foolproof solution, but a cheap one.
Depending on the value of the exploit, some genius crackers can go to great lengths to find your secret code. You need to weigh the factors: the cost of the previously mentioned server-side solution, the incentive for crackers to spend more effort on finding your secret code, and the complexity of the obfuscation you can implement.
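Purely to illustrate the obfuscation idea (and its weakness), here is a toy sketch; the byte values are placeholders, and anyone who decompiles the code can simply re-run the same reassembly:

```js
// Toy obfuscation: the secret is stored as XOR-ed segments and
// reassembled at runtime. This only raises the cost of a casual
// "strings" scan over the binary, nothing more.
const KEY = 0x42
const SEGMENTS = [ // placeholder bytes, produced offline by XOR-ing the secret with KEY
  [0x31, 0x27, 0x21],
  [0x30, 0x27, 0x36]
]

function recoverSecret () {
  return SEGMENTS
    .flat()
    .map(b => String.fromCharCode(b ^ KEY))
    .join('')
}

console.log(recoverSecret()) // "secret"
```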
Do not store the secret inside the application.
You need to have a server that can be accessed by the application over https (obviously) and you store the secret on it.
When someone wants to log in via your mobile/desktop application, your application simply forwards the request to the server, which then appends the secret and sends it to the service provider. Your server can then tell your application whether it was successful or not.
Then, if you need to get any sensitive information from the service (Facebook, Google, Twitter, etc.), the application asks your server, and your server will give it to the application only if it is correctly connected.
There is not really any option except storing it on a server. Nothing on the client side is secure.
Note
That said, this only protects you against a malicious client; it does not protect the client against a malicious you, nor the client against other malicious clients (phishing)...
OAuth is a much better protocol in browser than on desktop/mobile.
There is a new extension to the Authorization Code Grant Type called Proof Key for Code Exchange (PKCE). With it, you don't need a client secret.
PKCE (RFC 7636) is a technique to secure public clients that don't use a client secret. It is primarily used by native and mobile apps, but the technique can be applied to any public client as well. It requires additional support by the authorization server, so it is only supported on certain providers.
from https://oauth.net/2/pkce/
For more information, you can read the full RFC 7636 or this short introduction.
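A short sketch of the two PKCE values on the client, using Node's crypto for brevity; per RFC 7636, code_verifier is a high-entropy random string and code_challenge = BASE64URL(SHA256(code_verifier)):

```js
const crypto = require('crypto')

// RFC 7636 base64url: standard base64 with +/ swapped and padding stripped
function base64url (buf) {
  return buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '')
}

const codeVerifier = base64url(crypto.randomBytes(32))
const codeChallenge = base64url(crypto.createHash('sha256').update(codeVerifier).digest())

// 1. Send code_challenge (with code_challenge_method=S256) in the authorization request.
// 2. Exchange the returned code together with code_verifier for the token;
//    the server recomputes the hash, so no client secret is ever needed.
console.log({ codeVerifier, codeChallenge })
```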
Here's something to think about. Google offers two methods of OAuth... for web apps, where you register the domain and generate a unique key, and for installed apps where you use the key "anonymous".
Maybe I glossed over something in the reading, but it seems that sharing your webapp's unique key with an installed app is probably more secure than using "anonymous" in the official installed apps method.
With OAuth 2.0 you can simply use the client-side flow to obtain an access token and then use this access token to authenticate all further requests. Then you don't need a secret at all.
A nice description of how to implement this can be found here: https://aaronparecki.com/articles/2012/07/29/1/oauth2-simplified#mobile-apps
I don't have a ton of experience with OAuth - but doesn't every request require not only the user's access token, but an application consumer key and secret as well? So, even if somebody steals a mobile device and tries to pull data off of it, they would need an application key and secret as well to be able to actually do anything.
I always thought the intention behind OAuth was so that every Tom, Dick, and Harry that had a mashup didn't have to store your Twitter credentials in the clear. I think it solves that problem pretty well despite its limitations. Also, it wasn't really designed with the iPhone in mind.
I agree with Felixyz. OAuth whilst better than Basic Auth, still has a long way to go to be a good solution for mobile apps. I've been playing with using OAuth to authenticate a mobile phone app to a Google App Engine app. The fact that you can't reliably manage the consumer secret on the mobile device means that the default is to use the 'anonymous' access.
The Google App Engine OAuth implementation's browser authorization step takes you to a page where it contains text like:
"The site <some-site> is requesting access to your Google Account for the product(s) listed below"
YourApp(yourapp.appspot.com) - not affiliated with Google
etc
It takes <some-site> from the domain/host name used in the callback URL that you supply, which can be anything on Android if you use a custom scheme to intercept the callback.
So if you use 'anonymous' access or your consumer secret is compromised, then anyone could write a consumer that fools the user into giving access to your GAE app.
The Google OAuth authorization page also does contain lots of warnings which have 3 levels of severity depending on whether you're using 'anonymous', consumer secret, or public keys.
Pretty scary stuff for the average user who isn't technically savvy. I don't expect to have a high signup completion percentage with that kind of stuff in the way.
This blog post clarifies how consumer secrets don't really work with installed apps.
http://hueniverse.com/2009/02/should-twitter-discontinue-their-basic-auth-api/
Here I have answered with a secure way to store your OAuth information in a mobile application:
https://stackoverflow.com/a/17359809/998483
https://sites.google.com/site/greateindiaclub/mobil-apps/ios/securelystoringoauthkeysiniosapplication
Facebook doesn't implement OAuth strictly speaking (yet), but they have implemented a way for you not to embed your secret in your iPhone app: https://web.archive.org/web/20091223092924/http://wiki.developers.facebook.com/index.php/Session_Proxy
As for OAuth, yeah, the more I think about it, we are a bit stuffed. Maybe this will fix it.
None of these solutions prevents a determined hacker from sniffing packets sent from their mobile device (or emulator) to view the client secret in the HTTP headers.
One solution could be to have a dynamic secret, made up of a timestamp encrypted with a private two-way encryption key and algorithm. The service then decrypts the secret and determines whether the timestamp is within +/- 5 minutes.
In this way, even if the secret is compromised, the hacker will only be able to use it for a maximum of 5 minutes.
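As an illustration only (key provisioning is hand-waved here, and all names are made up), a sketch of that time-boxed secret could look like this:

```js
const crypto = require('crypto')

// Assumed to be provisioned to both sides out of band; the derivation here is illustrative.
const SHARED_KEY = crypto.scryptSync('shared passphrase', 'salt', 32)

// Client: encrypt the current timestamp with the shared key.
function makeDynamicSecret () {
  const iv = crypto.randomBytes(16)
  const cipher = crypto.createCipheriv('aes-256-cbc', SHARED_KEY, iv)
  const enc = Buffer.concat([cipher.update(String(Date.now())), cipher.final()])
  return iv.toString('hex') + ':' + enc.toString('hex')
}

// Server: decrypt and reject anything outside a +/- 5 minute window.
function verifyDynamicSecret (secret, windowMs = 5 * 60 * 1000) {
  const [ivHex, encHex] = secret.split(':')
  const decipher = crypto.createDecipheriv('aes-256-cbc', SHARED_KEY, Buffer.from(ivHex, 'hex'))
  const ts = Number(Buffer.concat([
    decipher.update(Buffer.from(encHex, 'hex')),
    decipher.final()
  ]).toString())
  return Math.abs(Date.now() - ts) <= windowMs
}

console.log(verifyDynamicSecret(makeDynamicSecret())) // true within the window
```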
I'm also trying to come up with a solution for mobile OAuth authentication, and storing secrets within the application bundle in general.
And a crazy idea just hit me: the simplest idea is to store the secret inside the binary, but obfuscated somehow; in other words, you store an encrypted secret. So that means you've got to store a key to decrypt your secret, which seems to have taken us full circle. However, why not just use a key which is already in the OS, i.e. one defined by the OS, not by your application?
So, to clarify my idea is that you pick a string defined by the OS, it doesn't matter which one. Then encrypt your secret using this string as the key, and store that in your app. Then during runtime, decrypt the variable using the key, which is just an OS constant. Any hacker peeking into your binary will see an encrypted string, but no key.
Will that work?
As others have mentioned, there should be no real issue with storing the secret locally on the device.
On top of that, you can always rely on the UNIX-based security model of Android: only your application can access what you write to the file system. Just write the info to your app's default SharedPreferences object.
In order to obtain the secret, one would have to obtain root access to the Android phone.
I am working with the BigCommerce API using OAuth. I am currently in the development phase. I have given the Auth Callback URL as
http://localhost:3000/resource_callback.
I am unable to get the store hash in context. It is only sending scope and code. What am I missing here? Is using http instead of https the reason? Please point me in the proper direction.
If you are receiving the Auth Callback Request but it only has code and scope query properties, then the problem is how you are installing your app. At this time it is necessary to install an app directly through your store's Control Panel, rather than using a link to do the install (as is common with most OAuth implementations). The use of a link for the install is something that will likely be added in the future, but OAuth on BC right now is geared towards public applications installed through the store.
That being said, it is possible to make OAuth credentials for a store even without making it a public application. Please follow the long answer on this question:
Can BigCommerce Private Apps use OAuth
This will cover the full process for generating OAuth API tokens, from registering an app to installing it into a store and beyond. Based on your question, you should start at the Generate the Auth Callback Request section. If you follow the steps there, then your Auth Callback Request will include the context property as well as the other two.
Update
You can now generate OAuth tokens in a store from Advanced Settings > API Accounts. As a result it is no longer necessary to install a draft app into a store for the sole purpose of generating OAuth tokens. You will still want to do this if you are developing an app for the BC App Marketplace or developing a user interface for your app that you want to live in the Control Panel of the store.
Just went through the same thing. See here: Bigcommerce Authentication code. Let me know if you need more details. SSL is mandatory.
In the context of WCF/Web Services/WS-Trust federated security, what are the generally accepted ways to authenticate an application, rather than a user? From what I gather, it seems like certificate authentication would be the way to go, i.e. generate a certificate specifically for the application. Am I on the right track here? Are there other alternatives to consider?
What you are trying to do is solve the general Digital Rights Management problem, which is an unsolved problem at the moment.
There are a whole host of options for remote attestation that involve trying to hide secrets of some sort (traditional secret keys, or semi-secret behavioural characteristics).
Some simple examples that might deter casual users of your API from working around it:
Include &officialclient=yes in the request
Include &appkey=<some big random key> in the request
Store a secret with the app and use a simple challenge/response: send a random nonce to the app and the app returns HMAC(secret, nonce) (see the sketch below)
In general, however, the 'defender's advantage' is quite small - however much effort you put in to try and authenticate that the bit of software talking to you is in fact your software, it isn't going to take your attacker/user much more effort to emulate it. (To break the third example I gave, you don't even need to reverse engineer the official client - the user can just hook up the official client to answer the challenges their own client receives.)
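A toy version of that third bullet (names invented for illustration); note how, as the answer says, anything the official client can compute, a reverse-engineered client can compute too:

```js
const crypto = require('crypto')

const APP_SECRET = 'embedded-in-the-client' // exactly the part an attacker can extract

// Server: issue a random nonce per session.
const nonce = crypto.randomBytes(16).toString('hex')

// Client: prove knowledge of the secret without sending it.
const response = crypto.createHmac('sha256', APP_SECRET).update(nonce).digest()

// Server: recompute and compare in constant time.
const expected = crypto.createHmac('sha256', APP_SECRET).update(nonce).digest()
console.log('client authenticated:', crypto.timingSafeEqual(response, expected))
```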
The more robust avenue you can pursue is licencing / legal options. A famous example would be Twitter, who prevent you from knocking up any old client through their API licence terms and conditions - if you created your own (popular) client that pretended to the Twitter API to be the official Twitter client, the assumption is their lawyers would come a-knocking.
If the application is under your control (e.g. your server) then by all means use a certificate.
If this is an application under user control (desktop), then there is no real way to authenticate the app in a strong way. Even if you use a certificate, a user can extract it and send messages outside the context of that application.
If this is not a critical secure system, you could do something good enough, like embedding the certificate inside the application resources. But remember: once the application is physically on the user's machine, every secret inside it can sooner or later be revealed.
I have REST services that I was planning on protecting with Windows Integrated Authentication (NTLM), as it should only be accessible to those internal to the company, and it will end up being on a website that is accessible by the public.
But, then I thought about mobile applications and I realized that Android, for example, won't be able to pass the credentials needed, so now I am stuck on how to protect it.
This is written in WCF 4.0, and my thought was to get the credentials, then determine who the user is and then check if they can use the GET request and see the data.
I don't want to force the user to pass passwords, as this will then be in the IIS log, and so is a security hole.
My present concern is for the GET request, as POST will be handled by the same method I expect.
One solution, which I don't think is a good option, would be to have them log into SharePoint, then accept only forwarded requests from SharePoint.
Another approach would be to put my SSO solution in front of these services, which would force people to log in if they don't have credentials, so the authentication would be done by SSO. Since the web service directory could be a subdirectory of the main SSO page, I could decrypt the cookie and get the username that way. But that would be annoying for the mobile users, which would include senior management.
So, what is a way to secure a REST service such that it is known who is making the request, so that authorization decisions can be made, and that will work for iPhone, Android, and BlackBerry smartphones?
I have the same problem, so let me give you the details; I would also appreciate feedback. Since you are using an internal system, you have one extra option, which I have listed.
My first option isn't perfect - yes, it could be hacked - but it is still better than nothing. With each request you pass the device's unique identifier along with a hash (sketched below). You generate the hash using a salt embedded in the application along with the id. On the server you match the incoming hash against one you generate on the server from the passed unique identifier. If someone "roots" their device and is smart enough, they could find the salt - you can obscure it further, but ultimately it could be stolen. Also, I keep all requests on SSL just to help hide the process. My "enhancement" to this process is to pass back a new salt after each request. New devices get one chance to obtain the next salt or get locked out... not sure about that step yet.
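A sketch of that device-id-plus-embedded-salt scheme (names are illustrative; the salt constant is exactly the piece a rooted device could expose):

```js
const crypto = require('crypto')

const EMBEDDED_SALT = 'baked-into-the-app' // recoverable by a determined attacker

// Client: send the device id plus a hash proving knowledge of the salt.
function signRequest (deviceId) {
  const hash = crypto.createHash('sha256').update(deviceId + EMBEDDED_SALT).digest('hex')
  return { deviceId, hash }
}

// Server: recompute with the same salt and compare.
function verifyRequest ({ deviceId, hash }) {
  const expected = crypto.createHash('sha256').update(deviceId + EMBEDDED_SALT).digest('hex')
  return expected === hash
}

console.log(verifyRequest(signRequest('device-1234'))) // true
```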
Now another approach is to have the user enter a "salt", or a username and password that only an internal user would know - the device obtains a token and then passes it (over SSL) with each request. Nobody outside your company could obtain that, so this is probably best. I can't use this since my app is in the app store.
Hope that helps! Let us all know if you ever found a good solution.
My current solution, in order to protect data in the system, is to force people to first log in to the application that the REST services support (our learning management system), as I have written an SSO solution that will write out a cookie with encrypted data.
Then, the REST service will look for that cookie, which disappears when you close the browser. I don't care if the cookie is expired; I just need the username from it. Then I can look in a config file to see if that user is allowed to use that REST service.
This isn't ideal, and what I want to do is redirect through the SSO code, and have it then send the person back to the REST service, but that is not as simple as I hoped.
My SSO code has lots of redirects, and will redirect someone to a spot they pick in the learning management system, I just need to get it to work with the other application.