How to secure source files of a project?

At my company we develop against a local server; we do not keep a copy of the file base on the company laptops that we take home.
There are two problems with that:
We can't work remotely efficiently.
File search (find + quick find in NetBeans, which I use a lot) is very slow.
What options do I have for securing the source code on my laptop, to protect it from thieves or hackers who may or may not get their hands on the machine?

I recommend TrueCrypt, as it's easy to use, free, open-source, and works both on Windows and Linux.
It encrypts/decrypts on the fly, with no temporary "plain text" files.
You can either create an encrypted file container or encrypt an entire drive, but I suspect a file container is enough in your case, as you need to protect just the source code (i.e. the httpdocs folder).
It has an option to automatically dismount the container when locking the computer (WIN+L on Windows) so you would also instantly be protected when you leave your laptop (at a client's location, for example).
Choose one of the encryption methods (they're all strong encryption algorithms; I prefer AES, as it's fast and it's the current standard, but you can go with another one or a cascade of two or three algorithms) and a hashing method (I would suggest SHA-512 over the default RIPEMD-160); a sketch of this kind of encryption follows this list.
Make sure you use a strong password (master key) when creating the container.
And make sure to back up your container file if you plan to work for a while without committing to SVN/Git. In case of hard-disk failure, encrypted data is harder to recover (if not impossible in some situations).
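To make the container idea concrete, here is a minimal sketch, in Swift with CryptoKit, of the kind of authenticated encryption (AES-GCM) such a container applies to your data under the hood. This is only an illustration, not a replacement for TrueCrypt: a real container format also handles deriving the key from the passphrase, headers, and random access, and the file paths below are hypothetical.

    import Foundation
    import CryptoKit

    // Encrypt a packed copy of the source tree with AES-GCM (authenticated
    // encryption: tampering is detected on decryption). A real tool would
    // derive the key from the master passphrase instead of generating it.
    func encryptArchive(at input: URL, to output: URL, key: SymmetricKey) throws {
        let plaintext = try Data(contentsOf: input)
        let sealed = try AES.GCM.seal(plaintext, using: key)
        try sealed.combined!.write(to: output)  // nonce + ciphertext + auth tag
    }

    func decryptArchive(at input: URL, key: SymmetricKey) throws -> Data {
        let box = try AES.GCM.SealedBox(combined: Data(contentsOf: input))
        return try AES.GCM.open(box, using: key)  // throws if the file was tampered with
    }

    // Hypothetical usage with a throwaway key:
    let key = SymmetricKey(size: .bits256)
    try encryptArchive(at: URL(fileURLWithPath: "httpdocs.tar"),
                       to: URL(fileURLWithPath: "httpdocs.tar.enc"),
                       key: key)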

Well, one obvious option is to use encryption. Keys used in modern crypto tools are now long enough that it would take a hacker decades to break them (I'm assuming here that you won't be the victim of an NSA attack, though even they would probably have a hard time breaking a 1024- or 2048-bit key :-)).
Which tool to choose mostly depends on your OS and budget. The good news is that there are many reliable free programs for this purpose; you can find lists of them here, here and here. Good luck.

Related

How is it possible to manipulate iOS code

I recently read about decompilation of iOS apps and I'm now really concerned about it. As stated in the following posts (#1 and #2), it is possible to decompile an iOS app which is distributed through the App Store. This can be done with a jailbreak, and I think also by copying the app from memory to disk. With some tools it is possible to:
read out strings (with the strings tool)
dump the header files
reverse engineer it to assembly code
It seems NOT to be possible to reverse engineer it back to Cocoa code.
As security is a feature of the software I create, I want to prevent bad users from reconstructing my security functions (encryption with a key, or logging in to websites). So I came up with the following questions:
Can someone reconstruct my saving, encryption, or login methods from the assembly? I mean, can he understand exactly what is going on (what is saved to which path at which time, which key is used, with what credentials a login to which website is performed)? I have no understanding of assembly; it looks like the Matrix to me...
How can I securely use NSStrings which cannot be read out with strings or read in the assembly? I know one can obfuscate strings - but that is still not secure, is it?
This is a problem that people have been chasing for years, and any sufficiently motivated person with skills will be able to uncover whatever information you don't want them to find, if that information is ever stored on a device.
Without jailbreaking, it's possible to disassemble apps by using the purchased or downloaded binary. This is static inspection and is facilitated with standard disassembly tools, although you need a tool which is good enough to add symbols from the linker and understand method calls sufficiently to tease out what's going on. If you want to get a feel for how this works, check out Hopper; it's a really good disassembly/reverse-engineering tool.
Specific to your secure log-in question, you have a bigger problem if you have a motivated attacker: system-based man-in-the-middle attacks. In this case, the attacker can shim out the networking code used by your system and see anything which is sent via standard networking. Therefore, you can't depend on sending any form of unencrypted data into a "secure" pipe at the OS or library level and expect it not to be seen. At a minimum you'll need to encrypt before the data enters the pipe (i.e. you can't depend on handing plain text to the standard SSL libraries).
You can compile your own set of SSL libraries and link them directly into your app, which means you don't get the system's performance and security improvements over time, but you can manually upgrade your SSL libraries as necessary. You could also create your own encryption, but that's fraught with potential issues, since motivated hackers might find it easier to attack your wire protocol at that point (publicly tested protocols like SSL are usually more secure than what you can throw together yourself, unless you are a particularly gifted developer with years of security/encryption experience).
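One common mitigation against shimmed networking is certificate pinning: the app refuses TLS connections whose server certificate doesn't match a copy it ships with. Here is a minimal sketch using URLSession; the pinnedCertData property (the DER bytes of your server's certificate) is a hypothetical placeholder you would load from the app bundle.

    import Foundation
    import Security

    // Reject any TLS connection whose leaf certificate differs from the pinned copy.
    final class PinningDelegate: NSObject, URLSessionDelegate {
        let pinnedCertData: Data  // DER-encoded certificate bundled with the app

        init(pinnedCertData: Data) { self.pinnedCertData = pinnedCertData }

        func urlSession(_ session: URLSession,
                        didReceive challenge: URLAuthenticationChallenge,
                        completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
            guard let trust = challenge.protectionSpace.serverTrust,
                  let cert = SecTrustGetCertificateAtIndex(trust, 0),
                  SecCertificateCopyData(cert) as Data == pinnedCertData else {
                completionHandler(.cancelAuthenticationChallenge, nil)  // mismatch: refuse
                return
            }
            completionHandler(.useCredential, URLCredential(trust: trust))
        }
    }

Pinning doesn't stop someone who modifies the binary itself, but it raises the bar for the casual proxy-based interception described above.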
However, all of this assumes that your attacker is sufficiently motivated. If you remove the low-hanging fruit, you may be able to prevent a casual hacker from making a simple attempt at figuring out your system. Some things to avoid:
storing plain-text encryption keys for either side of the encryption
storing keys in specifically named resources (a file named serverkey.text, or a key stored in a plist under a name which contains "key", are both classics)
relying on simple passwords wherever a stronger secret is possible
But most important is creating systems where the keys (if any) stored in the application itself are useless without information the user has to enter themselves (directly, or indirectly through systems such as OAuth). The server should not trust the client for any important operation without having had some interaction with a user who can be trusted.
Apple's Keychain provides a good place to store authentication tokens, such as the ones retrieved during an OAuth sequence. The API is a bit hard to work with, but the system is solid.
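As a rough illustration of what "a bit hard to work with" means, here is a minimal sketch of storing a token as a generic password item with the raw Security framework API; the service and account strings are hypothetical placeholders.

    import Foundation
    import Security

    // Store a token in the Keychain, replacing any existing item for the same
    // service/account pair. Returns true on success.
    func storeToken(_ token: String, service: String, account: String) -> Bool {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
        ]
        SecItemDelete(query as CFDictionary)  // remove any previous item; "not found" is fine

        var attributes = query
        attributes[kSecValueData as String] = Data(token.utf8)
        return SecItemAdd(attributes as CFDictionary, nil) == errSecSuccess
    }

    // Hypothetical usage after an OAuth exchange:
    let saved = storeToken("oauth-token-value", service: "com.example.myapp", account: "default")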
In the end, the problem is that no matter what you do, you're just upping the ante on the amount of work that it takes to defeat your measures. The attacker gets to control all of the important parts of the equation, so they will eventually defeat anything on the device. You are going to need to decide how much effort to put into securing the client, vs securing the server and monitoring for abuse. Since the attacker holds all of the cards on the device, your better approach is going to be methods that can be implemented on the server to enhance your goals.

Protecting NSUserDefaults from user or third party intrusion

On OS X a user can delete NSUserDefaults either by using the defaults utility or by deleting the plist; see man defaults. Is there a way this can be monitored, given that the app would like to detect it and take appropriate action if the user or any malicious program does this? Deleting it either way does not trigger NSUserDefaultsDidChangeNotification at all, so that cannot be used.
If you need to secure settings, use the keychain. If you want to do so without incurring the pain and suffering of learning the keychain, there are several wrappers available that make string entries look like User Defaults.
There are two different things here: "if the user or any malicious program does this."
Regarding "if the user..." the answer is no. The user can do anything she wants. She can modify your program if she wants. It's her hardware. In order to prevent that, you have to develop effective DRM. You're not going to do that on top of NSUserDefaults. Apple can barely pull that off when they control every piece of the ecosystem. Basically, if you could solve this problem, Apple could use the same solution to prevent jailbreaks of iPhones.
If the idea is that you just want to obfuscate things a bit from the user, and aren't trying to deal with a motivated and skilled attacker, then NSUserDefaults is not the right tool. It has "user" right in the name. It's the user's stuff. Put your secret things in a hidden place. You'll have to come up with your own idea for that, since the only reason it would work at all is because it's a secret only you know. (This will be broken very quickly by a motivated attacker of course, but it will work for most of the users who any other system would work for; keep it simple.)
Regarding "any malicious program," that's a bit different, since you're protecting your user (a tractable problem) rather than trying to protect yourself from your user (an intractable problem). Storage in keychain would probably be a good choice. It has several built-in protections from malicious applications accessing it. You can also store your data on a server rather than on the box, which would protect against most malicious software (particularly assuming you sign your app, so malicious software can't modify it).
If what you're really trying to do is manage trials and licensing, there are several products on the market to help you obfuscate your keys, trial periods, etc. They spend their money developing and refining obfuscation and adapting as attackers break it. It's a full-time job. Unless you have a team to devote to it, I'd use one of the commercial products. It won't really fix your problem (those products are cracked all the time), but at least you can get back to real development.
If it's not sensitive, then save it using NSUserDefaults. If it is sensitive, then use the keychain. If you want to store information securely through an NSUserDefaults-style API (with AES-256 encryption), then look into SecureNSUserDefaults (I have colleagues who use this, but I haven't had a need to myself).
Otherwise, save your data (encrypted by your own means if you wish) using your own preferred data structure (a dictionary or the like) to your app's documents folder.
Ultimately, anything that you store client-side can be removed by the user. But you can try to stop it being deciphered and/or edited.

Hacking the source code of iOS app

I have developed an app for iPhone which communicates with a web server through an XML-based communication model.
In one of my source files, NetworkLayer, I create XML objects and send them to the web server. I have also declared all the constants used in my app, including the URLs used to access my web server, in MyApp_Prefix.pch.
I want to ask if there is any way a hacker can get access to my source code for generating XML objects, or to the MyApp_Prefix.pch file, if he has the .app file of my app? Can anyone please help me in this regard?
No, he can't get your source code. But he can look at the HTTP requests and responses to see what XML you have created and what the server has sent back. Does that matter?
A hacker could reverse-engineer your code by looking at what it tells the device to do. With knowledge of assembly and reverse engineering, one can see much of what your code contains. This does, however, require serious effort and lots of time, so for most apps it is unlikely that anyone would attempt it.
A much easier way would be to intercept the data on its way to or from the server, and unless you are obfuscating the data, encrypting it, or using SSL, you can't prevent this.
If you are worried about protecting your data, you should try some simple obfuscation. There are many ways to do this, the most popular being XORing your data with a key that both the client and the server know. Applying the key flips bits in your data, quickly and easily turning it into unreadable gibberish; applying the same key again flips the same bits back, and you have perfectly readable XML again (see the sketch below).
It should be noted that XOR obfuscation is relatively easy to crack, especially since the key has to be stored as part of the application. Still, breaking it takes some time and effort, and it doesn't qualify as encryption legally (e.g. you shouldn't need to go through the whole encryption-export process when releasing the app), while keeping the data gibberish-y enough to throw off most people - which is usually enough, unless your data is really sensitive, e.g. if you're transferring payment credentials or similar.
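A minimal sketch of the XOR idea in Swift; the key and payload here are made-up placeholders, and, as noted above, this is obfuscation rather than real encryption.

    import Foundation

    // XOR each byte of the payload with the key, repeating the key as needed.
    // The same call both scrambles and restores the data.
    func xorObfuscate(_ data: Data, key: Data) -> Data {
        var out = Data(count: data.count)
        for i in 0..<data.count {
            out[i] = data[i] ^ key[i % key.count]
        }
        return out
    }

    let key = Data("not-a-real-secret".utf8)           // shared by client and server
    let xml = Data("<request><user>bob</user></request>".utf8)
    let scrambled = xorObfuscate(xml, key: key)        // gibberish on the wire
    let restored = xorObfuscate(scrambled, key: key)   // byte-identical to `xml`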

How would I secure my program from hex editors?

I've been around the hacking block, where I've seen people pull email passwords and FTP details out of programs, and I was wondering what the best bet is to protect those details without encrypting my VB.NET program.
Encryption is the only way to really stop the dedicated hacker. But if this is about passwords that the program needs to know itself for operation, then it will have to have the key embedded as well (or download it from your server every time), so the dedicated hacker could still get to it. It's the same problem the content industry faces in its Digital Restriction Management efforts: the player needs to be able to decode the media, they need to give people the player, so the player can be disassembled and the encryption cracked.
All you can do is obfuscate things a little (or a lot).
Or give up on client software and run your program as a web service, where people cannot get to the code.
Obfuscation and encryption may delay a crack, but only for a while, because every encryption system can be broken given:
1. Access.
2. Enough time.
Because an exact digital copy of anything can be made in minutes or seconds, #2 is guaranteed, so #1 becomes paramount.
Never store plain-text passwords in software or databases! Take a look at the SO Q&A about salting passwords for the details.
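To make the salting advice concrete, here is a minimal sketch of hashing a password with a random per-user salt. Plain SHA-256 is used only to illustrate the idea; a real system should use a deliberately slow derivation function such as PBKDF2, bcrypt, or scrypt.

    import Foundation
    import CryptoKit
    import Security

    // Hash a password together with a fresh random salt. Store the salt and
    // the digest, never the password itself; verification repeats the
    // computation with the stored salt and compares digests.
    func saltedHash(password: String) -> (salt: Data, digest: Data) {
        var salt = Data(count: 16)
        _ = salt.withUnsafeMutableBytes {
            SecRandomCopyBytes(kSecRandomDefault, 16, $0.baseAddress!)
        }
        let digest = SHA256.hash(data: salt + Data(password.utf8))
        return (salt, Data(digest))
    }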

What research-operating-system features would you advocate including in the Google Chrome Operating System?

Imagine that a large player is undertaking the construction of a new operating system, where backward compatibility requirements are limited to:
Run existing applications written in (or compiled to) JavaScript which are presented in HTML5 and styled with CSS3
Plug and play support for printers, external storage, and optical drives
Degrade gracefully when disconnected from the internet
Sufficient process quotas to support safely permitting tasks to run in the background, including timers
What specific features from existing research operating systems (such as Plan 9) would you like to see enter the mainstream through this channel? Please limit your suggestions to things that have been implemented, and provide a link to the implementation (or at least search terms).
From the Plan 9 docs:
Plan 9 began in the late 1980’s as an attempt to have it both ways: to build a system that was centrally administered and cost-effective using cheap modern microcomputers as its computing elements.
Netbooks qualify as cheap modern microcomputers, and the Cloud qualifies as centrally administered. There is an opportunity to implement the features (in DDaviesBrackett's words) that we want netbooks to have by some means other than extending a 1970s time-sharing OS; the research operating systems may have proved the value of alternatives by example.
From the Plan 9 FAQ:
Subject: What are its key ideas?
Plan 9 exploits, as far as possible, three basic technical ideas: first, all the system objects present themselves as named files that are manipulated by read/write operations; second, all these files may exist either locally or remotely, and respond to a standard protocol; third, the file system name space - the set of objects visible to a program - is dynamically and individually adjustable for each of the programs running on a particular machine. The first two of these ideas were foreshadowed in Unix and to a lesser extent in other systems, while the third is new: it allows a new engineering solution to the problems of distributed computing and graphics.
Plan 9's approach means that application programs don't need to know where they are running; where, and on what kind of machine, to run a Plan 9 program is an economic decision that doesn't affect the construction of the application itself.
Does that not appear to be an excellent fit for the netbook/Cloud domain?
Which operating system features would I advocate for Chrome OS?
Here is my wish list as a Plan 9/Inferno fan:
Resources (IP stack, graphics, etc.) as file systems.
Network-transparent file systems (i.e., 9P).
Private per-process namespaces.
A Factotum-like auth system (i.e., no root user).
Pure UTF-8 everywhere.
Extremely lightweight processes.
Automatic snapshots and de-duplicating storage (à la Venti + Fossil).
And I guess many others, but this would be enough to make me quite happy.
This is not an 'OS feature' per se, but I would love to have a GUI with mouse chording.
None.
I'd prefer for a new consumer OS, especially one targeted at netbooks, to be very, very good at doing the things that we already want OSes to do, rather than having time spent on features that are, by their nature, experimental.
(Of course, I'd be totally un-bothered by features I wasn't forced to use to develop on the platform; other people's toys are welcome as long as they don't make my job harder.)
I really think that Google might look to Plan 9 for inspiration, actually. Hearsay (the Internet) claims that several of those who initially developed UNIX and later scrapped it for a better design (Plan 9) are employed by Google. Google is also hosting its own version of Inferno, but I am not sure whether this is any central part of their plan. Further "evidence" could be that the Plan 9 authorization system (p9auth) for Linux was published by a Google researcher. A third piece of "evidence" would be that Google claims Chrome OS will have a novel security architecture.
The authorization system seems to me one of the GREATEST parts of Plan 9 that could be included right now (/net would also be nice, but there is no working code for that yet). The idea that a program needing root access gets only limited access to the parts determined by the authorization server is definitely a great step forward compared to the now-prevalent user/superuser/root division in Linux, where "man in the middle" attacks can (theoretically) be carried out by gaining full root access (as opposed to access limited by the authorization server) via a bug in a program granted root.