On OS X a user can delete NSUserDefaults, either with the defaults utility or by deleting the plist (see man defaults). Is there a way to monitor this, since the app would like to detect it and take appropriate action if the user or any malicious program does it? Deleting by either method does not trigger NSUserDefaultsDidChangeNotification at all, so that notification cannot be used.
If you need to secure settings, use the keychain. If you want to do so without incurring the pain and suffering of learning the keychain, there are several wrappers available that make string entries look like User Defaults.
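To make that concrete, here is a minimal sketch of what such a keychain-backed wrapper might look like in Swift, using the Security framework's generic-password items. The type name and service string are placeholders and the error handling is deliberately bare-bones; this illustrates the pattern rather than reproducing any particular library.

```swift
import Foundation
import Security

// Minimal sketch of a keychain-backed string store with a UserDefaults-like feel.
// "com.example.myapp" is a placeholder service name.
struct KeychainStore {
    let service = "com.example.myapp"

    func set(_ value: String, forKey key: String) {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: key
        ]
        // Remove any existing item, then add the new one.
        SecItemDelete(query as CFDictionary)
        var attributes = query
        attributes[kSecValueData as String] = Data(value.utf8)
        SecItemAdd(attributes as CFDictionary, nil)
    }

    func string(forKey key: String) -> String? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: key,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var result: CFTypeRef?
        guard SecItemCopyMatching(query as CFDictionary, &result) == errSecSuccess,
              let data = result as? Data else { return nil }
        return String(data: data, encoding: .utf8)
    }
}
```

Usage then reads much like User Defaults: KeychainStore().set("secret", forKey: "token") and KeychainStore().string(forKey: "token").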
There are two different things here: "if the user or any malicious program does this."
Regarding "if the user..." the answer is no. The user can do anything she wants. She can modify your program if she wants. It's her hardware. In order to prevent that, you have to develop effective DRM. You're not going to do that on top of NSUserDefaults. Apple can barely pull that off when they control every piece of the ecosystem. Basically, if you could solve this problem, Apple could use the same solution to prevent jailbreaks of iPhones.
If the idea is that you just want to obfuscate things a bit from the user, and aren't trying to deal with a motivated and skilled attacker, then NSUserDefaults is not the right tool. It has "user" right in the name. It's the user's stuff. Put your secret things in a hidden place. You'll have to come up with your own idea for that, since the only reason it would work at all is because it's a secret only you know. (This will be broken very quickly by a motivated attacker of course, but it will work for most of the users who any other system would work for; keep it simple.)
Regarding "any malicious program," that's a bit different, since you're protecting your user (a tractable problem) rather than trying to protect yourself from your user (an intractable problem). Storage in keychain would probably be a good choice. It has several built-in protections from malicious applications accessing it. You can also store your data on a server rather than on the box, which would protect against most malicious software (particularly assuming you sign your app, so malicious software can't modify it).
If what you're really trying to do is manage trials and licensing, there are several products on the market to help you obfuscate your keys, trial periods, etc. They spend their money developing and refining obfuscation and adapting as attackers break it. It's a full-time job. Unless you have a team to devote to it, I'd use one of the commercial products. It won't really fix your problem (those products are cracked all the time), but at least you can get back to real development.
If it's not sensitive, then save it using NSUserDefaults. If it is sensitive, then use the keychain. If you want to store information securely using NSUserDefaults (AES-256-bit encryption), then look into SecureNSUserDefaults (I have colleagues who use this but I haven't had a need to myself).
Otherwise, save your data (encrypted by your own means if you wish) using your own preferred data structure (dictionary or the like) to your app's documents folder.
Ultimately, anything that you store client side can be removed by the user. But you can try to stop it being deciphered and/or edited.
Related
I recently read about decompilation of iOS apps and I'm now really concerned about it. As stated in the following posts (#1 and #2), it is possible to decompile an iOS app that is distributed through the App Store. This can be done on a jailbroken device, and I think by copying the app from memory to disk. With some tools it is possible to
read out strings (with the strings tool)
dump the header files
reverse engineer to assembly code
It seems NOT to be possible to reverse engineer it back into Cocoa code.
As security is a feature of the software I create, I want to prevent bad users from reconstructing my security functions (encryption with key or log in to websites). So I came up with the following questions:
Can someone reconstruct my saving, encryption, or login methods from the assembly? I mean, can he understand exactly what is going on (what is saved to which path at which time, which key is used, etc., and with which credentials a login to which website is performed)? I have no understanding of assembly; it looks like the Matrix to me...
How can I securely use NSStrings that cannot be read out with strings or read in the assembly? I know one can obfuscate strings, but that is still not secure, is it?
This is a problem that people have been chasing for years, and any sufficiently-motivated person with skills will be able to find ways to find out whatever information you don't want them to find out, if that information is ever stored on a device.
Without jailbreaking, it's possible to disassemble apps using the purchased or downloaded binary. This is static inspection, and it's facilitated by standard disassembly tools, although you need a tool good enough to add symbols from the linker and understand method calls sufficiently to tease out what's going on. If you want to get a feel for how this works, check out Hopper; it's a really good disassembly/reverse-engineering tool.
Specifically to your secure log in question, you have a bigger problem if you have a motivated attacker: system-based man-in-the-middle attacks. In this case, the attacker can shim out the networking code used by your system and see anything which is sent via standard networking. Therefore, you can't depend on being able to send any form of unencrypted data into a "secure" pipe at the OS or library level and expect it not to be seen. At a minimum you'll need to encrypt before getting the data into the pipe (i.e. you can't depend on sending any plain text to standard SSL libraries). You can compile your own set of SSL libraries and link them directly in to your App, which means you don't get any system performance and security enhancements over time, but you can manually upgrade your SSL libraries as necessary. You could also create your own encryption, but that's fraught with potential issues, since motivated hackers might find it easier to attack your wire protocol at that point (publicly-tested protocols like SSL are usually more secure than what you can throw together yourself, unless you are a particularly gifted developer with years of security/encryption experience).
However, all of this assumes that your attacker is sufficiently motivated. If you remove the low-hanging fruit, you may be able to prevent a casual hacker from making a simple attempt at figuring out your system. Some things to avoid:
storing plain-text encryption keys for either side of the encryption
storing keys in specifically named resources (a file named serverkey.text, or a key stored in a plist under a name that contains the word "key", are both classics)
avoid simple passwords wherever possible
But, most important is creating systems where the keys (if any) stored in the application are useless without information the user has to enter themselves (directly, or indirectly through systems such as OAuth). The server should not trust the client for any important operation without having had some interaction with a user who can be trusted.
Apple's Keychain provides a good place to store authentication tokens, such as the ones retrieved during an OAuth sequence. The API is a bit hard to work with, but the system is solid.
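As a rough sketch of what that looks like in practice (in Swift, with a placeholder service name, and without the access-control attributes a real app should set), saving a token as a generic-password item boils down to an update-or-add dance:

```swift
import Foundation
import Security

// Save an OAuth-style token as a generic-password item, updating it in place if
// one already exists. Service and account names are placeholders.
func saveToken(_ token: String, account: String) -> OSStatus {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp.oauth",
        kSecAttrAccount as String: account
    ]
    let tokenData = Data(token.utf8)

    // Try to update an existing item first.
    let update: [String: Any] = [kSecValueData as String: tokenData]
    let status = SecItemUpdate(query as CFDictionary, update as CFDictionary)
    if status == errSecItemNotFound {
        // Nothing there yet: add a new item instead.
        var attributes = query
        attributes[kSecValueData as String] = tokenData
        return SecItemAdd(attributes as CFDictionary, nil)
    }
    return status
}
```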
In the end, the problem is that no matter what you do, you're just upping the ante on the amount of work that it takes to defeat your measures. The attacker gets to control all of the important parts of the equation, so they will eventually defeat anything on the device. You are going to need to decide how much effort to put into securing the client, vs securing the server and monitoring for abuse. Since the attacker holds all of the cards on the device, your better approach is going to be methods that can be implemented on the server to enhance your goals.
I'm creating a windows form app and the underlying code needs to be secure. In the code is database information and many equations which people should not be able to see.
What I'm asking is if I install the app on someone's computer, how easy is it for them to "break" into the application and view this sensitive information? If it's not difficult for them to find the code, are there ways to prevent this from happening? I would appreciate any input.
It's very easy to view code. Tools like ILSpy or .NET Reflector can practically show your code as you have written it in C# or VB.NET.
There are some possibilities, some free or cheap, some will cost you:
Obfuscation: This replaces names, and sometimes logic, in your executable with other code that is hardly human-readable. This is easy to do, and there are tools like Confuser that do a good job, but the code is still there and can be read. It only slows attackers down.
Another option that I have evaluated myself is hardware protection in the form of dongles. Here the whole application is encrypted with a secret key that is stored on a smartcard. Portions of the code that are needed are decrypted on the fly at runtime and executed. Since the code is encrypted, you can't read it easily. Solutions like CodeMeter are pretty hard to beat (there are no real cracks for these if implemented correctly, which isn't hard). But this is not free.
You always need to have the scope of your protection in mind. Who do you want to keep from getting your code?
The average guy who has also used .NET a few times and knows how to google and download ILSpy? Obfuscate it mildly and he will be annoyed enough to leave it be.
Some other people who really know what they are doing, but still without financial interest? Use some more drastic obfuscation like code restructuring and so on, and they will probably not invest weeks of their time just to find some formulas.
Some other company that is willing to put in the financial resources and the know-how of talented people to get your code and make a profit? Obfuscation will not help you. Maybe encryption will, maybe not.
We went with the Dongle solution since we also want to manage licensing in an easy way for the customers (of which most have very restricted online capabilities), while the code protection is a very nice additional feature.
You can use two-way cryptography before storing the information on the database. This question's answer has an explanation of how to do that very simply: Simple insecure two-way "obfuscation" for C#
As for the equations: if they're hardcoded in your app and you don't deliver the app's source code, the only way to retrieve them is disassembly, which, even with very simple tools, requires being fairly "computer savvy".
Introduction
I came across this scenario while trying to find a way to build a decentralized and synchronized database structure that is open to everyone. Since both the source code and the database are public, I need to find out whether there's a way to achieve a secure user authentication system. And if not, I'd like to know why not (it's not so obvious).
My idea is the following:
Suppose that I make it compulsory for users to have a password with numbers, capital letters and symbols (making it random so it does not appear in any dictionary). If I then use a hash function with very few collisions, the chance of this password being cracked should be very small.
Main problems:
Cracking dictionaries may also contain those random, strange-looking passwords.
Even if the chances of cracking it are small, crackers have all the time they want.
There must be an alternative:
Maybe I have to change the traditional user/password method and come up with something different. One solution could be to send a temporary access link to the user's email each time (no one but the user knows that mailbox's password), but this is not a nice or comfortable way to access a website.
Thanks for reading. If you think that I am trying something stupid, let me know and I'll be pleased (but I'd also appreciate a demonstration of my stupidity). Really, thanks.
Edit: I know I could use a third-party service, like OpenID... but this is also a curiosity question for me ;)
"making it random so it does not appear in any dictionary"
You can't assume that. There are dictionaries with passwords made up of symbols and different characters.
Did you try having a look at Kerberos?
I am not sure if I understood your question correctly, but I think you need to implement something like Kerberos.
I have proprietary information (formulas etc) stored in a property list which is shipped with the app.
The property list will be created and edited by the property list editor in Xcode.
How can this property list be encrypted in iOS 5 so that the user cannot read the formulas in it? I am looking for a solution that is very transparent and easy to implement.
First, this is a very specific form of the question "how do I prevent my application from being reverse engineered." The answer is you don't. You can implement some basic things to try to hide the information from an attacker. But there is no way to give your code to an attacker who has complete control of the hardware it runs on and still prevent it from being reverse engineered. For general discussion about this, see Obfuscating Cocoa. More versions of this question are listed in Secure https encryption for iPhone app to webpage.
So the real question is how to hide your information from the casual attacker, realizing that the dedicated attacker will defeat your scheme. When you ask the question that way, you realize that part of the answer is "as easily as possible because it would be silly to spend a lot of effort doing it if it's not going to be highly successful."
So shuffle the file with a long, random shared secret. Stick the shared secret in your code, and press on with life. If you want a good tool, I recommend CommonCrypto since it's built-in. Just remember that this is just obfuscation. As long as the key is in the software, you can't consider it "encryption."
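As an illustration of the idea (and only the idea), here is a sketch in Swift. The answer recommends CommonCrypto, which is the built-in C API; this sketch uses CryptoKit instead purely for brevity, and the hard-coded key and type name are placeholders. The point stands either way: the key ships inside the binary, so this is obfuscation, not real encryption.

```swift
import Foundation
import CryptoKit

// Sketch of the "shuffle the file with a shared secret" idea. The hard-coded key is
// the whole point: anyone who extracts it from the binary can decrypt the file.
enum ObfuscatedPlist {
    // Hypothetical 32-byte key baked into the app (base64 here for readability).
    private static let embeddedKey = SymmetricKey(
        data: Data(base64Encoded: "q83vASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4k=")!)

    // Encrypt the plist once, offline, and ship the sealed bytes in the bundle.
    static func seal(_ plistData: Data) throws -> Data {
        // .combined is non-nil for the default 12-byte nonce.
        return try AES.GCM.seal(plistData, using: embeddedKey).combined!
    }

    // At runtime, open the sealed file and parse the plain plist bytes.
    static func open(_ sealedData: Data) throws -> [String: Any]? {
        let box = try AES.GCM.SealedBox(combined: sealedData)
        let plain = try AES.GCM.open(box, using: embeddedKey)
        return try PropertyListSerialization.propertyList(
            from: plain, options: [], format: nil) as? [String: Any]
    }
}
```

You would run seal once at build time, ship the sealed file in the bundle, and call open at launch.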
If your secrets are valuable enough that you have significant ongoing technical and legal resources to protect them, then mail me some more details and we can talk about how you create an anti-piracy and trade-secret protection team within your organization (I have experience doing that and would be happy to provide consulting expertise). But remember, Apple controls the iPhone top to bottom and has spent serious money to secure it. It's still jailbroken. Unless you are going to apply resources on a similar scale, you shouldn't expect a better result. In almost all cases, you are better off spending your resources making your product better than protecting what you've shipped.
Examples are in the iOS Developer Library.
https://developer.apple.com/library/ios/#documentation/Security/Conceptual/CertKeyTrustProgGuide/iPhone_Tasks/iPhone_Tasks.html#//apple_ref/doc/uid/TP40001358-CH208
I've been around the hacking block, where I've seen people pull email passwords and FTP details out of programs, and I was wondering what the best bet is to protect those details without encrypting my VB.NET program.
Encryption is the only way to really stop a dedicated hacker. But if this is about passwords that the program itself needs to know for operation, then it will have to have the key embedded as well (or maybe download it from your server every time), so the dedicated hacker could still get to it. It's the same problem the content industry faces in its Digital Restriction Management efforts: the player needs to be able to decode the media, they need to give people the player, so the player can be disassembled and the encryption cracked.
All you can do is obfuscate things a little (or a lot).
Or give up on client software and run your program as a web service, where people cannot get to the code.
Obfuscation and encryption may delay a crack, but only for a while, because every encryption system can be broken with:
1. Access.
2. Enough time.
Because an exact digital copy of anything can be made in minutes or seconds, the attacker's time (#2) is effectively unlimited, so #1, access, becomes paramount.
Never store plain-text passwords in software or databases! Take a look at the SO Q&A about Salting Passwords for the details.
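For reference, the structure of that salt-and-hash approach looks roughly like this (a sketch in Swift rather than VB.NET, with made-up type and function names; a production system should use a deliberately slow KDF such as PBKDF2, bcrypt, scrypt, or Argon2 rather than a single SHA-256 pass, and a constant-time comparison):

```swift
import Foundation
import CryptoKit

// Store only a random per-user salt and the hash of (salt + password),
// never the password itself.
struct StoredCredential {
    let salt: Data
    let hash: Data
}

func makeCredential(for password: String) -> StoredCredential {
    // 16 random bytes of salt, unique per user.
    let salt = Data((0..<16).map { _ in UInt8.random(in: .min ... .max) })
    let digest = SHA256.hash(data: salt + Data(password.utf8))
    return StoredCredential(salt: salt, hash: Data(digest))
}

func verify(_ password: String, against credential: StoredCredential) -> Bool {
    // Note: use a constant-time comparison in real code.
    let digest = SHA256.hash(data: credential.salt + Data(password.utf8))
    return Data(digest) == credential.hash
}
```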