I am working on an Android and iOS application that needs a password-less login solution. We are trying to implement WebAuthn/FIDO2 with a FIDO2 device.
The problem is that FIDO2 is still new and there is no React Native library that implements it, so I have a few questions.
Can we read and write our own key on the FIDO2 device?
=> Until we get a proper library, I want to store an encrypted password on the FIDO2 device as a key, read it on every login, and decrypt it. Does this sound like a reasonable approach, and is it possible to do?
To support WebAuthn/FIDO2 from your React Native iOS application, the recommended solution is to integrate one of the two Apple iOS system browser components that support the WebAuthn APIs: ASWebAuthenticationSession or SFSafariViewController. ASWebAuthenticationSession would be my first choice, as it is designed for authentication through a web service, specifically the OAuth 2 flow. It provides the interface and built-in APIs for interacting with a FIDO2 authenticator like the YubiKey, and gives the developer control via a callback with the session and authentication token. Another way to integrate WebAuthn is to use a third-party SDK for communicating with OAuth 2 providers. For example, AppAuth for iOS has a React Native bridge, available here. I believe the AppAuth SDK uses ASWebAuthenticationSession.
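To make that concrete, here is a minimal sketch of what the flow might look like from the React Native side, assuming the community react-native-app-auth module (the AppAuth bridge mentioned above); the issuer, client ID and redirect URL are placeholders for your own provider, not values from the question.

```typescript
// Hedged sketch: assumes the community react-native-app-auth module, which
// wraps AppAuth and the system authentication session on iOS.
// All issuer/client values below are placeholders for your own OAuth 2/OIDC provider.
import { authorize } from 'react-native-app-auth';

const config = {
  issuer: 'https://auth.example.com',              // placeholder provider
  clientId: 'my-mobile-client-id',                 // placeholder client id
  redirectUrl: 'com.example.app://oauth/callback', // custom scheme registered by the app
  scopes: ['openid', 'profile'],
};

export async function signIn() {
  // Opens the system authentication session, runs the OAuth 2 flow (where the
  // provider's login page can itself invoke WebAuthn/FIDO2), and returns the
  // tokens to the app via the redirect callback.
  const result = await authorize(config);
  return result; // typically includes accessToken, idToken and expiry info
}
```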
As for your initial question about writing/reading your own custom key: FIDO2 devices are limited in storage space, but the YubiKey offers two options that may work for you. One is to create a static password (not encrypted); the other is to use Yubico OTP. Both options use the system keyboard to type the password or OTP into any text/password field within your app, with no SDK or system browser required.
FIDO2/WebAuthn is specifically a browser API. Since you're talking about authentication within a (React) native app, you'll probably want to fall back to the equivalent native OS APIs instead.
For Android you can use the Fido2ApiClient, which will let you leverage existing FIDO2 credentials on your server for in-app authentication:
https://developers.google.com/android/reference/com/google/android/gms/fido/fido2/Fido2ApiClient
I think the equivalent on the iOS side of native app development is the Authentication Services framework. They have a page specifically about leveraging "passkeys" in your app that will probably help get you started:
https://developer.apple.com/documentation/authenticationservices/public-private_key_authentication
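For orientation, this is roughly what the browser-side WebAuthn assertion (login) call looks like; the native APIs above expose the same challenge/credential model. A hedged sketch: the relying-party ID is a placeholder and the challenge must come from your own server.

```typescript
// Hedged sketch of the browser WebAuthn assertion (login) call that the native
// Fido2ApiClient / Authentication Services APIs mirror. The challenge must be
// issued by your server; the rpId below is a placeholder.
async function webAuthnLogin(challengeFromServer: ArrayBuffer, credentialId: ArrayBuffer) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer,           // random bytes issued by the server
      rpId: 'example.com',                      // placeholder relying-party ID
      allowCredentials: [{ type: 'public-key', id: credentialId }],
      userVerification: 'preferred',
      timeout: 60_000,
    },
  });
  // Send the assertion back to the server, which verifies the signature against
  // the public key registered during enrollment.
  return assertion;
}
```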
We want to write a React Native app that:
- gets data over Bluetooth from devices
- the app should send the data to our API
- it's important that the data is not tampered with or changed in any way
- the app is the only one that can send data to our API
I already read a lot about:
iOS - Keychain Services and
Android - Keystore
on the React Native docs: https://reactnative.dev/docs/security
And SafetyNet (Android) or DeviceCheck (iOS) (never mentioned in the React Native docs or the articles I read).
What security layers should we use for our use case to make the API as secure as possible, and how can I implement them in React Native?
We want to use the data from the API to verify the correctness of the same data passed to a smart contract that compares and evaluates them.
YOUR PROBLEM
We want to use the data from the API to verify the correctness of the same data passed to a smart contract that compares and evaluates them.
I congratulate you for having taken the time to understand that the API sitting in front of a blockchain needs to be protected against abuse in order to prevent the blockchain from ingesting unwanted data.
Defending an API is not an easy task, but if you read carefully everything I am about to say, I hope that by the end you will have a new perspective on API and mobile security that will allow you to devise and architect a robust and secure solution.
WHO IS IN THE REQUEST VERSUS WHAT IS MAKING THE REQUEST
- the app is the only one that can send data to our API
This is a very hard problem to solve, but not an impossible one. To understand why, you first need to know the difference between who is in the request and what is making it; otherwise any security you add may not be protecting your API as expected.
The Difference Between WHO and WHAT is Accessing the API Server
I wrote a series of articles around API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract here the main takeaways from it:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
So, think about the who as the user your API server will be able to Authenticate and Authorize access to the data, and think about the what as the software making that request on behalf of the user.
DATA INTEGRITY
- gets data over Bluetooth from devices
- the app should send the data to our API
- it's important that the data is not tampered with or changed in any way
This is also very hard to solve. During the process of collecting the data and sending it to the API, the data can be tampered with in several ways.
Manipulate Data with an Instrumentation Framework
- gets data over Bluetooth from devices
While the data is being collected from the devices, an instrumentation framework can be used to manipulate it before it is sent to the API. A popular instrumentation framework is Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
So, the attacker would inject a script to listen at runtime to the method that collects the data, or to the one that sends the data to the API, and then tamper with the data as it is being sent.
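To illustrate, a minimal sketch of what such a Frida hook might look like; the class and method names are hypothetical stand-ins for the app's own code, not something taken from the question.

```typescript
// Hedged sketch of a Frida script (run with the frida CLI attached to the app).
// Class and method names are hypothetical placeholders.
declare const Java: any; // provided by the Frida runtime inside an Android process

Java.perform(() => {
  const Collector = Java.use('com.example.app.BluetoothCollector'); // hypothetical class
  Collector.readMeasurement.implementation = function () {
    const genuine = this.readMeasurement(); // call the original implementation
    console.log('genuine reading:', genuine);
    return 42.0; // hand back a tampered value instead of the real data
  };
});
```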
Manipulating Data with a MitM Attack
- the app should send the data to our API
Another alternative is for the attacker to use Frida to perform a MitM attack, allowing a tool like mitmproxy to intercept and modify the request. You can learn how to perform a MitM attack with Frida by reading my article How to Bypass Certificate Pinning with Frida on an Android App:
Today I will show how to use the Frida instrumentation framework to hook into the mobile app at runtime and instrument the code in order to perform a successful MitM attack even when the mobile app has implemented certificate pinning.
Bypassing certificate pinning is not too hard, just a little laborious, and allows an attacker to understand in detail how a mobile app communicates with its API, and then use that same knowledge to automate attacks or build other services around it.
The injection of Frida scripts at runtime allows for almost unlimited possibilities in how to tamper with your data integrity or whatever the mobile app is doing at runtime.
POSSIBLE SOLUTIONS
Secure Storage
I already read a lot about:
iOS - Keychain Services and
Android - Keystore
on the React Native docs: https://reactnative.dev/docs/security
Using this mechanism is recommended, but you need to be aware that anything that is stored in secure storage will need to be accessed and used by the mobile app at some point, and this is when the attacker can use an instrumentation framework to hook at runtime into the mobile app code. For example, when retrieving a securely stored secret the attacker can extract it to use outside of the mobile app to automate API requests as if they were from the mobile app.
So, use it to make it harder for less skilled attackers to tamper with your mobile app, but always remember that more skilled attackers may find their way around it.
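For reference, a minimal sketch of using secure storage from React Native, assuming the community react-native-keychain module, which stores values behind iOS Keychain Services and Android Keystore-backed storage; the service name is a placeholder.

```typescript
// Hedged sketch: assumes the community react-native-keychain module, which
// backs onto iOS Keychain Services / Android Keystore. Remember the caveat
// above: anything read back at runtime can be hooked by an instrumented app.
import * as Keychain from 'react-native-keychain';

const SERVICE = 'com.example.app.apiToken'; // placeholder service identifier

export async function saveToken(token: string): Promise<void> {
  await Keychain.setGenericPassword('api', token, { service: SERVICE });
}

export async function loadToken(): Promise<string | null> {
  const creds = await Keychain.getGenericPassword({ service: SERVICE });
  return creds ? creds.password : null; // null if nothing is stored yet
}
```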
Protecting Data Integrity in the Mobile App
- it's important that the data is not tampered with or changed in any way
To protect data from being tampered with before it arrives at the API server, you need to employ additional solutions, like RASP:
Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software.
RASP technology is said to improve the security of software by monitoring its inputs, and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering.
The issue with using only RASP is that the API server has no visibility into ongoing attacks on the mobile app, and is therefore not able to refuse requests from a mobile app under attack. Also, RASP can be bypassed by skilled attackers with the use of instrumentation frameworks, and the API server will not be aware of this happening, therefore it will continue to serve requests, because it doesn't have a mechanism to know that what is making the request is indeed a genuine and un-tampered version of your mobile app.
Defending the API Server
I recommend that you read this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Hardening and Shielding the Mobile App, Securing the API Server and A Possible Better Solution.
One of the solutions proposed is to use a Mobile App Attestation solution that runs outside the mobile device, for example in the cloud. It therefore doesn't make client-side decisions about the state of the mobile app and the device it is running on; instead, those decisions are made by the cloud service and transmitted to the API server as a signed JWT token, which the API server can then use to verify that what is making the request is indeed a genuine and un-tampered version of the official mobile app.
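On the API-server side, the check on that signed JWT could look roughly like the sketch below. This is only an illustration, not any specific product's API: it assumes the attestation service signs an HS256 token with a secret shared with the API server, and the header name is made up.

```typescript
// Hedged sketch of the API-server side of Mobile App Attestation: reject
// requests that don't carry a valid attestation JWT. Uses the jsonwebtoken
// npm package; the header name and secret handling are illustrative only.
import jwt from 'jsonwebtoken';
import type { Request, Response, NextFunction } from 'express';

const ATTESTATION_SECRET = process.env.ATTESTATION_SECRET as string; // shared with the attestation service

export function requireAttestation(req: Request, res: Response, next: NextFunction) {
  const token = req.header('X-App-Attestation'); // illustrative header name
  if (!token) return res.status(401).end();
  try {
    jwt.verify(token, ATTESTATION_SECRET); // throws if the signature or expiry is invalid
    return next();                          // genuine, un-tampered app: serve the request
  } catch {
    return res.status(401).end();           // attestation failed: refuse the request
  }
}
```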
Android SafetyNet and iOS DeviceCheck
And SafetyNet (Android) or DeviceCheck (iOS) (never mentioned in the React Native docs or the articles I read).
Using the Android SafetyNet and iOS DeviceCheck runtime protections is for sure a good starting point, but you need to be aware of their scope, limitations and complexity. They can be complemented with a robust Mobile App Attestation solution to give you a higher level of security and confidence that your API server will be able to know when a request is not from what it expects: a genuine and un-tampered version of your mobile app.
Security Layers
What security layers should we use for our use case to make the API as secure as possible, and how can I implement them in React Native?
I will not cover here how to implement this in React Native, because that is a huge topic and the exact code will depend on your current implementation, but I will summarize the key points.
Security is always about adding as many layers as you can afford and are required by law, standards and business requirements. To summarize you should consider the following topics:
- Don't hardcode secrets in your mobile app code, but if you really must, at least use native C code.
- Obfuscate your mobile app code, because this will make it harder to reverse engineer in order to use instrumentation frameworks.
- Use runtime protections in your mobile app code, and give preference to the ones that don't make decisions on the client side and that allow the API server to verify that the request is indeed from what it expects, a genuine and un-tampered version of your mobile app, as described in the Mobile App Attestation section above.
- Use certificate pinning to the public key to prevent MitM attacks, but with the awareness that it can be bypassed. I recommend you read the section Preventing MitM Attacks in this answer I gave to another question, where you will learn how to implement static certificate pinning. If you can, use dynamic certificate pinning instead, so that the pins used by your mobile app can be updated remotely.
- In your API server you can use rate limiting, but do not return the available rate-limit info in the response headers, because that is like putting the key to your front door under the mat (see the sketch below).
- You can use Artificial Intelligence solutions, but be aware that they work on a negative identification model and are prone to false negatives and positives. If you use a mobile app runtime protection that lets the API server know when it is under attack, then the use of AI solutions can be postponed until the API server needs to serve other types of clients, like web apps.
This is not an exhaustive list of topics you can consider in order to secure your mobile app and API server, but they are the ones I think are most important for you to focus on.
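As a sketch of the rate-limiting point above: a tiny in-memory limiter that deliberately returns no X-RateLimit-* headers to the caller. The window and limit values are arbitrary placeholders, and in production you would back this with a shared store rather than a process-local map.

```typescript
// Hedged sketch: rate limit per caller without advertising the limits back
// in response headers. Window/limit values are arbitrary; use a shared store
// (e.g. Redis) in production instead of an in-process Map.
import type { Request, Response, NextFunction } from 'express';

const WINDOW_MS = 60_000;  // 1 minute window
const MAX_REQUESTS = 30;   // max requests per key per window
const hits = new Map<string, { count: number; resetAt: number }>();

export function rateLimit(req: Request, res: Response, next: NextFunction) {
  const key = req.ip ?? 'unknown';
  const now = Date.now();
  const entry = hits.get(key);
  if (!entry || entry.resetAt < now) {
    hits.set(key, { count: 1, resetAt: now + WINDOW_MS });
    return next();
  }
  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    // No rate-limit headers on purpose: don't hand the attacker your limits.
    return res.status(429).end();
  }
  return next();
}
```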
DO YOU WANT TO GO THE EXTRA MILE?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
I recently read a lot of posts and articles about securing sensitive info in a React Native app. From what I understand, you can't fully protect your sensitive info; you can only make it harder for hackers to get it.
So, from that point of view, I would like to know whether it wouldn't be "safer" to get that sensitive info (e.g. API keys) from an external server (e.g. a REST API).
Let me explain:
I know about MitM attacks, but would it be safer (and more flexible) to have my mobile app call my API to get API keys on request through HTTPS? This way, no sensitive info remains in the app binary files.
And to protect against MitM attacks, I could frequently change those API key values so they would remain valid only for a short period of time.
I would like to hear from anyone about the PROS and CONS of such a system.
API Misconceptions
To prepare you for my answer, I will first clear up some common misconceptions around public/private APIs and about who vs what is really accessing your backend.
Public and Private APIs
I often see developers think that their APIs are private, because they have no docs for them, have not advertised them anywhere, and for many other reasons.
The truth is that when you release a mobile app, all the APIs it communicates with now belong to the public domain, and if these APIs don't have authentication and authorization mechanisms in place, then all data behind them can be accessed by anyone on the internet who reverse engineers how your mobile app works. Even when APIs have authentication in place, they may be vulnerable to bad implementations of it, and some have a total lack of authorization mechanisms, or buggy ones, as per the OWASP API Security Top 10 vulnerability list.
The Difference Between WHO and WHAT is Accessing the API Server
I wrote a series of articles around API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract here the main takeaways from it:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
So think about the who as the user your API server will be able to Authenticate and Authorize access to the data, and think about the what as the software making that request on behalf of the user.
API Keys Service
I know about MitM attacks, but would it be safer (and more flexible) to have my mobile app call my API to get API keys on request through HTTPS? This way, no sensitive info remains in the app binary files.
While you indeed wouldn't have any sensitive info in the app binary files, you haven't solved the problem. In my opinion you are more exposed, because you are now getting the API keys from a public and open API endpoint.
I say it's open because you don't have any safeguard that what is making the request to it is indeed a genuine and untampered version of your mobile app.
So, now all an attacker needs to do is MitM attack your mobile app, or decompile it to see from which API endpoint you grab the API keys used to make the requests, and then replicate the procedure from their automated scripts/bots. Therefore it doesn't really matter that you no longer have the keys hardcoded in the app binary.
API Keys Rotation
And to protect against MitM attacks, I could frequently change those API key values so they would remain valid only for a short period of time.
In light of the explanation above, in the API Keys Service section, you can even restrict the API keys to being used for one single request, and the attacker will still succeed, because the attacker will be able to query the API endpoint to obtain API keys as if they were what the backend expects: a genuine and untampered version of your mobile app.
So, to be clear, I am in favour of API key rotation, but only if you can get the keys into your mobile app from a secured external source; your approach is open to being accessed by anyone on the internet.
I would like to hear from anyone about the PROS and CONS of such a system.
The system you are describing is not advisable to implement, because without being secured it's just a security disaster waiting to happen, and securing it with an API key just brings you back to the initial problem, with the disadvantage that you are handing back to the mobile app the sensitive info you want to keep away from hackers.
The best approach for you is to use a Reverse Proxy to keep the API keys private and secured from prying eyes.
The Reverse Proxy Approach
So, from that point of view, I would like to know whether it wouldn't be "safer" to get that sensitive info (e.g. API keys) from an external server (e.g. a REST API).
What you are looking for is to implement a Reverse Proxy, which is usually used to protect access to third-party APIs and your own APIs, by having the mobile app delegate the API requests to the Reverse Proxy instead of asking for the API keys and making the requests from inside the mobile app.
The Reverse Proxy approach avoids having several API keys hardcoded in the mobile app, but you still need one API key to protect access to the Reverse Proxy itself, therefore you are still vulnerable to MitM attacks and to static reverse engineering of your mobile app.
The advantage now is that all your sensitive API keys are private and in an environment you control, where you can employ as many security measures as you need to ensure that the requests are indeed from what your backend expects: a genuine and untampered version of your mobile app.
Learn more about using a Reverse Proxy by reading the article I wrote Using a Reverse Proxy to Protect Third Party APIs:
In this article you will start by learning what Third Party APIs are, and why you shouldn’t access them directly from within your mobile app. Next you will learn what a Reverse Proxy is, followed by when and why you should use it to protect the access to the Third Party APIs used in your mobile app.
While the article focuses on third-party APIs, the principle also applies to your own APIs.
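A minimal Node/Express sketch of the idea, under stated assumptions: the third-party endpoint, its query parameter and the environment variable holding the key are all placeholders, and the extra checks (auth, attestation, rate limiting) are only indicated by a comment.

```typescript
// Hedged sketch of the Reverse Proxy idea: the mobile app calls /proxy/quotes
// on your server, and the third-party API key never ships in the app binary.
// The upstream URL and THIRD_PARTY_API_KEY variable are placeholders.
// Requires Node 18+ for the global fetch.
import express from 'express';

const app = express();

const THIRD_PARTY_URL = 'https://api.thirdparty.example/v1/quotes'; // placeholder
const API_KEY = process.env.THIRD_PARTY_API_KEY as string;          // kept server-side only

app.get('/proxy/quotes', async (req, res) => {
  // Here you can also enforce user auth, attestation checks, rate limiting, etc.
  const symbol = encodeURIComponent(String(req.query.symbol ?? ''));
  const upstream = await fetch(`${THIRD_PARTY_URL}?symbol=${symbol}`, {
    headers: { Authorization: `Bearer ${API_KEY}` }, // secret added server-side
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```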
Preventing MitM Attacks
When certificate pinning is implemented in a mobile app to secure the HTTPS channel, the sensitive data in the API requests is better safeguarded from being extracted.
I recommend that you read the section Preventing MitM Attacks in this answer I gave to another question, where you will learn how to implement static certificate pinning and how to bypass it.
Despite being possible to bypass certificate pinning I still strongly recommend it to be implemented, because it reduces the attack surface on your mobile app.
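As a side note, the "pin" that pinning configurations typically expect is the base64 SHA-256 hash of the server certificate's SPKI (public key). A hedged Node sketch of computing that value from a PEM certificate; the file path is a placeholder.

```typescript
// Hedged sketch: compute the SPKI pin (base64 of SHA-256 over the public key's
// DER encoding) that static certificate pinning configurations usually expect.
// Requires Node 15.6+ for X509Certificate; the certificate path is a placeholder.
import { createHash, X509Certificate } from 'node:crypto';
import { readFileSync } from 'node:fs';

const pem = readFileSync('server-cert.pem', 'utf8');   // placeholder path
const cert = new X509Certificate(pem);
const spkiDer = cert.publicKey.export({ type: 'spki', format: 'der' });
const pin = createHash('sha256').update(spkiDer).digest('base64');

console.log(`sha256/${pin}`); // value to configure as the pinned public key hash
```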
A Possible Better Solution
I recommend you to read this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Hardening and Shielding the Mobile App, Securing the API Server and A Possible Better Solution.
The solution would be the use of a Mobile App Attestation solution that allows your backend to have a high degree of confidence that the request is from what it expects: a genuine and untampered version of your mobile app.
Do You Want To Go The Extra Mile?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
I'm building a website that uses WebRTC to share audio and video. Now I'd like to access WebRTC features on Android devices so I can create an app that can receive audio and video streams from the website.
I've looked for a technology that allows me to do that and I've found SkylinkJS.
It looks great, but I'm wondering something. Can I build a custom authentication system on top of the SkylinkJS logic? What I mean is that I'd like to make sure the connections to SkylinkJS rooms are initiated by users actually authenticated on my platform.
At the moment I do that using socket.io, but I can do it because I'm using raw WebRTC. How can I do that using SkylinkJS? Using the REST API?
Thanks.
PS: I cannot tag this question with 'skylinkjs' since it's a new tag, but it might be cool if someone could do it.
Yes, you can integrate that with the Applications REST API. You can generate your own credentials.
You can generate the connection credentials from your server: when the user logs in, generate the credentials for the user to connect to the room. See more in their support article.
SkylinkJS uses a key-based authentication mechanism to authenticate against the Temasys signaling servers. This ensures that any application using Skylink can only connect to calls in your application if the app can provide the same secure keys (from your Temasys developer account).
Your best bet for looping in Android would be to use the Android counterpart: http://skylink.io/android/
When using the OAuth protocol, you need a secret string obtained from the service you want to delegate to. If you are doing this in a web app, you can simply store the secret in your database or on the file system, but what is the best way to handle it in a mobile app (or a desktop app, for that matter)?
Storing the string in the app is obviously not good, as someone could easily find it and abuse it.
Another approach would be to store it on your server, and have the app fetch it on every run, never storing it on the phone. This is almost as bad, because you have to include the URL in the app.
The only workable solution I can come up with is to first obtain the Access Token as normal (preferably using a web view inside the app), and then route all further communication through our server, which would append the secret to the request data and communicate with the provider. Then again, I'm a security noob, so I'd really like to hear some knowledgeable people's opinions on this. It doesn't seem to me that most apps are going to these lengths to guarantee security (for example, Facebook Connect seems to assume that you put the secret into a string right in your app).
Another thing: I don't believe the secret is involved in initially requesting the Access Token, so that could be done without involving our own server. Am I correct?
Yes, this is an issue with the OAuth design that we are facing ourselves. We opted to proxy all calls through our own server. OAuth wasn't entirely fleshed out with respect to desktop apps. There is no perfect solution to the issue that I've found without changing OAuth.
If you think about it and ask why we have secrets, it is mostly for provisioning and disabling apps. If our secret is compromised, then the provider can only really revoke the entire app. Since we have to embed our secret in the desktop app, we are sorta screwed.
The solution is to have a different secret for each desktop app. OAuth doesn't make this concept easy. One way is to have the user go and create a secret on their own and enter the key into your desktop app themselves (some Facebook apps did something similar for a long time, having the user go and create a Facebook key to set up their custom quizzes and such). It's not a great experience for the user.
I'm working on a proposal for a delegation system for OAuth. The concept is that, using the secret key we get from our provider, we could issue our own delegated secrets to our own desktop clients (one for each desktop app, basically), and then during the auth process we send that key over to the top-level provider, which calls back to us and re-validates it with us. That way we can revoke the secrets we issue to each desktop client. (This borrows a lot from how SSL works.) This entire system would also be perfect for value-added web services that pass on calls to a third-party web service.
The process could also be done without delegation verification callbacks if the top-level provider provides an API to generate and revoke new delegated secrets. Facebook is doing something similar by allowing Facebook apps to let users create sub-apps.
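Purely as an illustration of that proposal (this is not an existing OAuth extension), one way the delegated secrets could be derived and revoked is to compute one per client from a single master secret:

```typescript
// Illustrative sketch only: derive one revocable secret per desktop client
// from a single master secret, as in the delegation idea described above.
import { createHmac, randomUUID } from 'node:crypto';

const MASTER_SECRET = process.env.MASTER_SECRET as string; // the secret from the top-level provider
const revoked = new Set<string>();                          // client ids whose secrets were revoked

export function issueDelegatedSecret(): { clientId: string; secret: string } {
  const clientId = randomUUID();
  const secret = createHmac('sha256', MASTER_SECRET).update(clientId).digest('hex');
  return { clientId, secret }; // embed these in that one desktop client build
}

export function verifyDelegatedSecret(clientId: string, secret: string): boolean {
  if (revoked.has(clientId)) return false; // revoking one client doesn't affect the others
  const expected = createHmac('sha256', MASTER_SECRET).update(clientId).digest('hex');
  return expected === secret;
}

export function revoke(clientId: string): void {
  revoked.add(clientId);
}
```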
There are some talks about the issue online:
http://blog.atebits.com/2009/02/fixing-oauth/
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/629b03475a3d78a1/de1071bf4b820c14#de1071bf4b820c14
Twitter's and Yammer's solution is an authentication PIN solution:
https://dev.twitter.com/oauth/pin-based
https://www.yammer.com/api_oauth_security_addendum.html
With OAuth 2.0, you can store the secret on the server. Use the server to acquire an access token that you then move to the app, and you can make calls from the app to the resource directly.
With OAuth 1.0 (Twitter), the secret is required to make API calls. Proxying calls through the server is the only way to ensure the secret is not compromised.
Both require some mechanism by which your server component knows it is your client calling it. This tends to be done on installation, using a platform-specific mechanism to get an app ID of some kind into the call to your server.
(I am the editor of the OAuth 2.0 spec)
One solution could be to hard-code the OAuth secret into the code, but not as a plain string. Obfuscate it in some way: split it into segments, shift characters by an offset, rotate it, or do any or all of these things. A cracker can analyse your byte code and find strings, but the obfuscation code might be hard to figure out.
It's not a foolproof solution, but a cheap one.
Depending on the value of the exploit, some genius crackers may go to greater lengths to find your secret code. You need to weigh the factors: the cost of the previously mentioned server-side solution, the incentive for crackers to spend more effort on finding your secret code, and the complexity of the obfuscation you can implement.
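A minimal sketch of the split-and-shift idea described above. The segments, their order and the offset are of course placeholders (here they reassemble to a dummy value), and as said, this only slows an attacker down rather than stopping them.

```typescript
// Hedged sketch of cheap obfuscation: store the secret as shifted, shuffled
// segments and reassemble it at runtime. The pieces below decode to the
// placeholder value "mysecretkey99"; this only raises the bar, nothing more.
const SEGMENTS = ['gxrl', 'zlf', '99', 'rper']; // shifted pieces, stored out of order
const ORDER = [1, 3, 0, 2];                     // how to reorder them
const OFFSET = 13;                              // character shift applied when storing

function deobfuscate(): string {
  const shifted = ORDER.map(i => SEGMENTS[i]).join('');
  // Undo the shift on lowercase letters only, leaving digits untouched.
  return shifted.replace(/[a-z]/g, ch =>
    String.fromCharCode(((ch.charCodeAt(0) - 97 - OFFSET + 26) % 26) + 97),
  );
}

console.log(deobfuscate()); // the secret is never present as one literal in the binary
```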
Do not store the secret inside the application.
You need to have a server that can be accessed by the application over HTTPS (obviously), and you store the secret on it.
When someone wants to log in via your mobile/desktop application, your application simply forwards the request to the server, which then appends the secret and sends it to the service provider. Your server can then tell your application whether it was successful or not.
Then, if you need to get any sensitive information from the service (Facebook, Google, Twitter, etc.), the application asks your server, and your server will give it to the application only if it is correctly connected.
There is not really any option except storing it on a server. Nothing on the client side is secure.
Note
That said, this will only protect you against a malicious client, but not the client against a malicious you, and not the client against other malicious clients (phishing)...
OAuth is a much better protocol in the browser than on desktop/mobile.
There is a new extension to the Authorization Code Grant Type called Proof Key for Code Exchange (PKCE). With it, you don't need a client secret.
PKCE (RFC 7636) is a technique to secure public clients that don't use a client secret. It is primarily used by native and mobile apps, but the technique can be applied to any public client as well. It requires additional support by the authorization server, so it is only supported on certain providers.
from https://oauth.net/2/pkce/
For more information, you can read the full RFC 7636 or this short introduction.
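For a feel of what PKCE involves, a minimal sketch of generating the code_verifier and its S256 code_challenge as defined in RFC 7636; it uses Node's crypto module here, but in a mobile app you would use the platform's secure random and hashing APIs instead.

```typescript
// Minimal sketch of the PKCE pieces from RFC 7636: a random code_verifier and
// its S256 code_challenge. No client secret is needed anywhere in this flow.
import { randomBytes, createHash } from 'node:crypto';

function base64url(buf: Buffer): string {
  return buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

// 1. The client creates a high-entropy verifier and keeps it locally.
const codeVerifier = base64url(randomBytes(32));

// 2. The client sends only the challenge with the authorization request.
const codeChallenge = base64url(createHash('sha256').update(codeVerifier).digest());

console.log({ codeVerifier, codeChallenge, method: 'S256' });
// 3. When exchanging the authorization code for a token, the client sends the
//    original code_verifier; the server hashes it and compares it to the
//    challenge it received earlier.
```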
Here's something to think about. Google offers two methods of OAuth... for web apps, where you register the domain and generate a unique key, and for installed apps where you use the key "anonymous".
Maybe I glossed over something in the reading, but it seems that sharing your webapp's unique key with an installed app is probably more secure than using "anonymous" in the official installed apps method.
With OAuth 2.0 you can simply use the client-side flow to obtain an access token and then use this access token to authenticate all further requests. Then you don't need a secret at all.
A nice description of how to implement this can be found here: https://aaronparecki.com/articles/2012/07/29/1/oauth2-simplified#mobile-apps
I don't have a ton of experience with OAuth - but doesn't every request require not only the user's access token, but an application consumer key and secret as well? So, even if somebody steals a mobile device and tries to pull data off of it, they would need an application key and secret as well to be able to actually do anything.
I always thought the intention behind OAuth was so that every Tom, Dick, and Harry that had a mashup didn't have to store your Twitter credentials in the clear. I think it solves that problem pretty well despite its limitations. Also, it wasn't really designed with the iPhone in mind.
I agree with Felixyz. OAuth, whilst better than Basic Auth, still has a long way to go to be a good solution for mobile apps. I've been playing with using OAuth to authenticate a mobile phone app to a Google App Engine app. The fact that you can't reliably manage the consumer secret on the mobile device means that the default is to use 'anonymous' access.
The Google App Engine OAuth implementation's browser authorization step takes you to a page that contains text like:
"The site <some-site> is requesting access to your Google Account for the product(s) listed below"
YourApp(yourapp.appspot.com) - not affiliated with Google
etc
It takes <some-site> from the domain/host name used in the callback URL that you supply, which can be anything on Android if you use a custom scheme to intercept the callback.
So if you use 'anonymous' access, or your consumer secret is compromised, then anyone could write a consumer that fools the user into giving access to your GAE app.
The Google OAuth authorization page also contains lots of warnings, which have three levels of severity depending on whether you're using 'anonymous' access, a consumer secret, or public keys.
Pretty scary stuff for the average user who isn't technically savvy. I don't expect to have a high signup completion percentage with that kind of stuff in the way.
This blog post clarifies how consumer secrets don't really work with installed apps:
http://hueniverse.com/2009/02/should-twitter-discontinue-their-basic-auth-api/
Here I have answered with a secure way of storing your OAuth information in a mobile application:
https://stackoverflow.com/a/17359809/998483
https://sites.google.com/site/greateindiaclub/mobil-apps/ios/securelystoringoauthkeysiniosapplication
Facebook doesn't implement OAuth strictly speaking (yet), but they have implemented a way for you not to embed your secret in your iPhone app: https://web.archive.org/web/20091223092924/http://wiki.developers.facebook.com/index.php/Session_Proxy
As for OAuth, yeah, the more I think about it, we are a bit stuffed. Maybe this will fix it.
None of these solutions prevent a determined hacker from sniffing packets sent from their mobile device (or emulator) to view the client secret in the HTTP headers.
One solution could be to have a dynamic secret which is made up of a timestamp encrypted with a private two-way encryption key and algorithm. The service then decrypts the secret and determines whether the timestamp is within +/- 5 minutes.
In this way, even if the secret is compromised, the hacker will only be able to use it for a maximum of 5 minutes.
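A hedged sketch of that idea: the client encrypts the current timestamp with a shared symmetric key, and the service decrypts it and rejects anything outside the ±5-minute window. The algorithm (AES-256-GCM) and the key handling are illustrative choices, not the only way to do it.

```typescript
// Hedged sketch of the dynamic-secret idea above: encrypt a timestamp with a
// shared symmetric key; the service decrypts it and only accepts it if the
// timestamp is within +/- 5 minutes. AES-256-GCM and the env-var key are
// illustrative choices only.
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const KEY = Buffer.from(process.env.DYNAMIC_SECRET_KEY as string, 'hex'); // 32-byte shared key
const WINDOW_MS = 5 * 60 * 1000;

export function makeDynamicSecret(): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(String(Date.now()), 'utf8'), cipher.final()]);
  // Pack iv + auth tag + ciphertext so the service can decrypt and authenticate it.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64');
}

export function verifyDynamicSecret(secret: string): boolean {
  try {
    const raw = Buffer.from(secret, 'base64');
    const iv = raw.subarray(0, 12);
    const tag = raw.subarray(12, 28);
    const ciphertext = raw.subarray(28);
    const decipher = createDecipheriv('aes-256-gcm', KEY, iv);
    decipher.setAuthTag(tag);
    const timestamp = Number(
      Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8'),
    );
    return Math.abs(Date.now() - timestamp) <= WINDOW_MS; // stale or future secrets are rejected
  } catch {
    return false; // tampered or malformed secret
  }
}
```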
I'm also trying to come up with a solution for mobile OAuth authentication, and storing secrets within the application bundle in general.
And a crazy idea just hit me: the simplest idea is to store the secret inside the binary, but obfuscated somehow; in other words, you store an encrypted secret. So, that means you've got to store a key to decrypt your secret, which seems to have taken us full circle. However, why not just use a key which is already in the OS, i.e. one defined by the OS, not by your application?
So, to clarify, my idea is that you pick a string defined by the OS; it doesn't matter which one. Then encrypt your secret using this string as the key and store that in your app. Then, at runtime, decrypt the variable using the key, which is just an OS constant. Any hacker peeking into your binary will see an encrypted string, but no key.
Will that work?
As others have mentioned, there should be no real issue with storing the secret locally on the device.
On top of that, you can always rely on the UNIX-based security model of Android: only your application can access what you write to the file system. Just write the info to your app's default SharedPreferences object.
In order to obtain the secret, one would have to obtain root access to the Android phone.