Is there a standard implementation for Electronic Signatures on fill-in-form web applications?

I have a client who is interested in adding in electronic signature support to a long (40 question) seller application form. I'm a little stumped on whether there is an existing standard or process that's out there that folks in the financial world would expect to see?
I could certainly add a system where we generate a block of text based on their responses, have the applicant sign it with their private key, and upload a public key, but that seems like a lot to ask of people. Do non-nerds even have PGP installed these days?
Is there a standard approach to this out there? Anyone work in the financial world that's done this and had it work well?

We use Alphatrust's e-Sign Software.

What purpose is the signature trying to fill? Are you trying to verify that the form actually came from a specific seller? (If so, you would have to know their public key ahead of time.) Are you trying to hold the seller accountable for their answers at a later date? (In that case, you might need some kind of third party involved.)
Sometimes people ask for electronic signatures just because they sound neat.

If these forms are meant to be open to the general public, you would need to know (and be able to validate, which is the hardest part) every issuer of the certificates people might use to sign them, and that is practically impossible.
In a closed environment (civil servants, doctors, and so on), where every user is expected to hold a certificate from a CA you already know and trust, and where you need to be sure the form was submitted by someone trusted (non-repudiation, integrity, and so on), signing a form is a much better fit. Otherwise I would not recommend signed forms to achieve your goal.

Related

What is the algorithm for "blindness"?

Suppose I want to develop an IM application, and I want to assure my users that I will not obtain their conversation data; that is, prove by an algorithm that I do not know something.
How can I do that? Is there something similar to the public-key method for doing this?
I do not believe this is a solvable problem as long as the application you provide is a black box to the user. The way to achieve what you want is to provide the source code to your client so that the user can inspect it and potentially compile it themselves. For example, consider Tarsnap, which is targeting exactly this kind of problem (they provide online backups "for the truly paranoid"). The Tarsnap client is only provided in source form.
You can provide a similar (but weaker) assurance by publishing your protocol specification without publishing the source code to your implementation. This allows the user to inspect the protocol, determine whether it sends data that could be read in transit, and potentially implement their own client to protect themselves from any side-channels that might exist in your client.
The overarching rule is that cryptography is best done in public. Each piece of your system that is secret is a piece on which the user must implicitly trust you and cannot verify your behavior. The fewer secrets you keep, the more trustworthy you can be.
Ultimately, however, I do not believe it is possible to prove that Eve does not know something. It is only possible to prove that Eve cannot discover something given that she stays within some set of rules.
As a quick proof by counter-example:
Alice sends message M to Bob using a provably-secure transform E(K,M).
Eve intercepts E(K,M), but since it is provably-secure, and she does not have K, she cannot decrypt it.
Eve begins dating Bob and convinces him to tell her K.
Eve performs D(K,E(K,M)) and recovers M.
Therefore, E(K,M) does not prove blindness over all possible attacks, despite being provably-secure over traditional attacks.
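As a concrete (hypothetical) illustration of why possession of K is all that matters, here is a minimal Swift/CryptoKit sketch where AES-GCM simply stands in for any provably-secure E; the key and message are made up:

    import Foundation
    import CryptoKit

    // E and D instantiated as AES-GCM; the point is only that whoever holds K can run D.
    let K = SymmetricKey(size: .bits256)          // Alice and Bob's shared key
    let M = Data("meet at noon".utf8)             // illustrative message

    do {
        // Alice computes E(K, M) and sends it; Eve intercepts it but cannot open it without K.
        let intercepted = try AES.GCM.seal(M, using: K).combined!

        // Later, having obtained K from Bob, Eve simply computes D(K, E(K, M)).
        let recovered = try AES.GCM.open(AES.GCM.SealedBox(combined: intercepted), using: K)
        print(String(data: recovered, encoding: .utf8)!)   // "meet at noon"
    } catch {
        print("crypto error: \(error)")
    }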

Ways to protect my framework in Xcode?

We intend to sell our framework on the net, and it needs to be protected so that someone who buys it can't put it on the net or give it to other developers.
We don't want to find it all over the net after a few months.
I had a few approaches in mind, but each has its catch:
Give a unique ID to every developer and bake that ID into the framework, so the developer must enter it to use it. The problem is that they can pass the framework along with the ID to anyone.
Ask for the device ID and enable only that device in the framework for each developer. The problem here is that once the app is in the store, other users can't use it, since they have different device IDs.
Use the net to check somehow (??), which I would prefer not to do, so as not to impose that requirement on users.
Embed in each copy of the framework a code that only I can extract, so when I find it on the net I can tell which developer put it there. (That doesn't really help; I can't sue everyone.)
Is there any other way to tie the framework to one developer, but still let it work for all of that developer's users once the app is on the App Store?
Thanks.
diederikh makes very good points, and NicolasMiari also provides good insight. The best answer, IMO, is a combination of the two (while keeping in mind diederikh's excellent advice that your goal is to come up with something simple that does not make things hard on legitimate customers).
Rather than recompiling your entire framework for every customer, make your license key depend on their bundle identifier. They send you their bundle ID; you sign it with your private key. This produces a signature that you send to them. Now, at runtime, your framework uses the public key (which is not sensitive; you could publish it anywhere) to verify your signature. See SecKeyRawVerify() for doing that on iOS.
You can use this approach to create time-limited keys. Just include time stamps in the signed data.
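A minimal sketch of that scheme in Swift, using CryptoKit's Curve25519 signatures rather than the older SecKeyRawVerify() API; the bundle ID, expiry string, and key handling are all illustrative, and in practice only the public key and the customer's signature would ship inside the framework:

    import Foundation
    import CryptoKit

    // Vendor side (done once, offline): sign the customer's bundle ID plus an expiry date.
    let vendorKey = Curve25519.Signing.PrivateKey()            // kept secret by the vendor
    let customerBundleID = "com.example.customerapp"           // hypothetical customer bundle ID
    let expiry = "2026-01-01"                                  // optional, for time-limited keys
    let licensedPayload = Data("\(customerBundleID)|\(expiry)".utf8)
    let licenseSignature = try! vendorKey.signature(for: licensedPayload)   // sent to the customer

    // Framework side (at runtime): rebuild the payload from the host app's actual bundle ID
    // and verify it against the vendor's public key, which is safe to embed in the binary.
    let publicKey = vendorKey.publicKey
    let runtimeBundleID = Bundle.main.bundleIdentifier ?? ""
    let runtimePayload = Data("\(runtimeBundleID)|\(expiry)".utf8)
    let isLicensed = publicKey.isValidSignature(licenseSignature, for: runtimePayload)
    print(isLicensed ? "license valid for this bundle ID" : "license invalid")

To make the key time-limited, the framework would also parse the expiry component and compare it against the current date before accepting the signature.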
Using this approach, you could, if you wanted, let customers test your framework indefinitely by using your bundle identifier. You would make a signed hash of that identifier available to trial customers. But as soon as they want to upload to AppStore, they would have to change the identifier and pay you for a new signed key.
There certainly is a way to get around this. Attackers could modify your framework to ignore the signature verification. But that's always true, and preventing that is better done with lawyers after the fact than with DRM that will only likely cause trouble for paying customers.
Look at how PSPDFKit does it. If you want to use it outside of demo mode, you have to call a method with a unique ID. This ID enables functionality that is not available in demo mode.
You can also sign the framework (with the codesign tool) with a unique certificate for each customer.
I would not worry too much; people will always find a way around your locks.

How can I obfuscate a static password in objective c?

I need to hide a password used to connect to a server. The problem is that the password is given by the service provider, so it is static.
I have already thought of using the keychain, but the problem is that even then I would need to hard-code the password somewhere in the code in order to insert it into the keychain.
So, is there a way to hide a static password so that it is available to my app, without writing it in my code?
I would think about setting up a middle-layer server (a kind of proxy) between the users of your app and the service provider. It will allow you to:
set different password for each user
optionally give users a chance to change a password
have more control over who uses the service and what data is transmitted
be more independent of your service provider (e.g. change it anytime)
It will require more effort, but it may be more advantageous in the long run. A minimal client-side sketch of the idea follows.
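For illustration only, here is roughly what the app side could look like in Swift, assuming a hypothetical https://api.example.com proxy that you operate; the endpoint and token are made up, and the real service-provider password never leaves your server:

    import Foundation

    // The app authenticates to *your* proxy with a per-user token, never with the
    // service provider's static password.
    var request = URLRequest(url: URL(string: "https://api.example.com/proxy/orders")!)
    request.httpMethod = "GET"
    request.setValue("Bearer per-user-token-issued-at-login", forHTTPHeaderField: "Authorization")

    URLSession.shared.dataTask(with: request) { data, response, error in
        // Your proxy validates the token, attaches the provider's credentials server-side,
        // forwards the call, and relays the provider's response back to the app.
        if let data = data, let body = String(data: data, encoding: .utf8) {
            print(body)
        } else if let error = error {
            print("request failed: \(error)")
        }
    }.resume()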
This is not a solvable problem, and has been discussed at length around SO. For one "hub" question that includes links to several others, see Secure https encryption for iPhone app to webpage.
Using obscurity is not a horrible thing. But keep it simple. XOR the value with some random key. Done. Putting more and more layers buys you nothing and costs you complexity (which means time and bugs, which are the enemies of both profit and security). If someone puts a debugger on your code, they are just going to log all the data you send to the server, so all the hoops you jump through to hide how you compute the password won't matter, because eventually you have to send it to the server. So keep it simple enough to stop people from just running "strings" to pull it out, and recognize that you cannot stop a debugger.
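A minimal sketch of that XOR approach in Swift; the key bytes and the placeholder password are made up, and this only hides the string from casual inspection, not from a debugger:

    import Foundation

    // The same XOR pass both obfuscates and de-obfuscates, since XOR is its own inverse.
    let xorKey: [UInt8] = [0x3A, 0x91, 0x5C, 0x07, 0xE2, 0x4B, 0x7D, 0x18]   // arbitrary bytes

    func xorTransform(_ bytes: [UInt8], with key: [UInt8]) -> [UInt8] {
        return bytes.enumerated().map { index, byte in byte ^ key[index % key.count] }
    }

    // At build time, ship only the obfuscated bytes in the binary:
    let obfuscated = xorTransform(Array("placeholder-password".utf8), with: xorKey)

    // At runtime, recover the value just before it is needed:
    let recovered = String(bytes: xorTransform(obfuscated, with: xorKey), encoding: .utf8)!
    print(recovered)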
The only way to secure the service-provider's key is to put that key on your server, and then proxy for the service after authenticating the user. If you put it in the code, then it is discoverable, period. If this were a solvable problem, there would be no unlicensed copies of software, no unlicensed copies of music, no jailbreaks for iPhones, etc etc etc. If Apple can't stop reverse engineering when controlling every piece of the system from the hardware to the OS, you're not going to fix it inside of an app.
What you should be thinking about is how to recover if and when the key is lost. How do you discover that it's happened? How do you expire that key and create a new one? If you're shipping the key in the code, you must assume that it eventually will be discovered, and you should have a plan for dealing with it.
As a side note, one technique I've used in the past is to download the key from our server on-demand rather than encoding it anywhere in the app. We use authenticated HTTPS and check our certificates. Of course it is still possible to fool this system (it's always possible to fool a system that gives a client information they're only supposed to use a certain way), but the thinking is at least we can change the key more easily this way to stem the tide briefly if the key leaks.
This is the key-and-box problem: you can keep putting your key in a new box and hiding that key in yet another box, and you can keep doing this... but in the end you always have the last key, and nowhere to hide it.
Personally, I would obfuscate the key to the keychain and hide the real key in the keychain.
If it is a really important secret, you can use AES to encrypt your key, but then again you are stuck with your encryption key; here you can use something that is device-specific instead of a hardcoded value and derive your key from that property.
Certainly not perfect, but it will do the job in most cases.
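A rough Swift sketch of that last idea, assuming an iOS app; identifierForVendor stands in for the device-specific property, the secret value is a placeholder, and in a real app the sealed bytes would go into the keychain:

    import Foundation
    import CryptoKit
    import UIKit

    do {
        // Derive a 256-bit key from a device-specific property instead of hardcoding one.
        let deviceProperty = UIDevice.current.identifierForVendor?.uuidString ?? "fallback-id"
        let derivedKey = SymmetricKey(data: SHA256.hash(data: Data(deviceProperty.utf8)))

        // Encrypt the real secret before persisting it (e.g. in the keychain).
        let secret = Data("placeholder-service-password".utf8)
        let sealedBytes = try AES.GCM.seal(secret, using: derivedKey).combined!

        // Later, on the same device, the same derivation yields the same key, so the
        // stored bytes can be opened again; on a different device they cannot.
        let opened = try AES.GCM.open(AES.GCM.SealedBox(combined: sealedBytes), using: derivedKey)
        print(String(data: opened, encoding: .utf8)!)
    } catch {
        print("crypto error: \(error)")
    }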

Can individuals digitially sign Silverlight OOB apps for public release in their website name?

I have read all the blog posts on digital signing and checked out GoDaddy, Thawte, and a couple of others. All of these say that you need to be a registered company and have official documentation, proof of incorporation, etc.
I don't have any of that - I am a sole trader based in Australia who runs a social network (PokerDIY.com) for poker players, and now I am releasing a free app (PokerDIY Tourney Manager) and I need to let users type whilst in fullscreen mode (it's almost ironic that I am doing all this just for that). So I am looking into digitally signing my .xap so that I can run in elevated trust whilst OOB. In the eyes of the law I am just a hobbyist developer.
So I have a couple of questions that I would like answered before spending $100 on a certificate that I might not be able to use:
1) Can I buy a SL code-signing certificate as an individual? Jeff Wilcox's blog (which is the most useful I have read on this matter for developers in my situation) seems to imply that you can, and the Ksoftware site (https://secure.ksoftware.net/code_signing.html) seems to imply the same. However, this leads to Q2:
2) Can this be used commercially? I.e., if I decide to charge for my app (it's available globally for free, but I will probably have an ad-free version at some point), can I use this individual certificate that I purchased?
3) And can I register it to PokerDIY (my domain name), which is NOT a registered company? I would rather not register it to my full real name; it would look odd to a user if it said "PokerDIY Tourney Manager - Publisher: My Name". This is probably the most important point, as I am doing all of this for the perception of being a reputable entity.
There's really not much info out there on this, and I don't want to make a mistake when it comes to $100 for a year's cert! (I won't go into how annoying it is to have to pay to release a free app just so people can type in fullscreen mode ;)
Thanks!
Yes, you can sign as an individual and use it for any purpose, including commercial purposes, with companies such as KSoftware, but the larger providers like Thawte will not allow an individual to do so. I do not know the specifics for Australia, however, so there could be regional differences, different companies, or other restrictions specific to your situation.
Your publisher name (your actual name) will appear in the elevation dialog for the Silverlight application. It may look less professional to some, but it will still serve the purpose of providing a code-signing certificate and some level of assurance for your customers.
But understand that your name and the address you provide for verification will be present in that certificate for anyone to see (which answers your third question).
Authenticode signing certificates don't actually match to a site such as your domain - they only verify the code signer - so you can use it with any site.
A good option for you might be to self-sign your app. This doesn't require a third party, and so no yearly fee.
You can also distribute a trusted XAP without signing it at all, but you won't be able to update it via the Silverlight updating mechanism.

How can I prevent my software from running after it is copied to another machine?

I have developed a small piece of software that I want to provide and run commercially only. I want it to run only on the machines of people who have purchased it from me.
If someone copies it from my client's computer and runs it on another computer, I would like the software to stop functioning.
What are the ways to prevent piracy of my software?
An adaptation of one of my previous answers:
There are a few ways to "activate" copied software to try to stop casual copying of the application.
In the simplest case, a registration code ("CD key") is purchased from you, possibly via your website, and sent to the user, who enters it into the program or installer. The whole process can basically be done offline; the program itself locally determines that the code is valid or invalid.
This is nice and easy, but it is extremely vulnerable to key sharing: since there is no "phoning home", the application cannot know that thousands of different people are all using the same key that they got off the internet, from a serial library, or from a friend. It is also reasonably easy to make "keygens" which generate valid-seeming keys that were never actually issued by the developers.
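As a hypothetical sketch of the offline case in Swift, a code might just be the licensee name plus a truncated HMAC over it; the embedded secret here is made up, and the fact that it has to ship inside the program is exactly why such schemes can be key-genned once someone extracts it:

    import Foundation
    import CryptoKit

    // Illustrative secret baked into the program; extracting it lets anyone write a keygen.
    let embeddedSecret = SymmetricKey(data: Data("demo-embedded-secret".utf8))

    func makeCode(for licensee: String) -> String {
        let mac = HMAC<SHA256>.authenticationCode(for: Data(licensee.utf8), using: embeddedSecret)
        let hex = mac.map { String(format: "%02X", $0) }.joined()
        return "\(licensee)-\(hex.prefix(8))"          // e.g. "alice-3F92C1A0"
    }

    func isValidCode(_ code: String) -> Bool {
        // Validation is entirely local: recompute the expected code and compare.
        guard let dash = code.lastIndex(of: "-") else { return false }
        let licensee = String(code[..<dash])
        return makeCode(for: licensee) == code
    }

    print(isValidCode(makeCode(for: "alice")))   // true
    print(isValidCode("alice-DEADBEEF"))         // almost certainly false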
Then we get into online registration. You still have some kind of code, but the program will phone home back to the server to determine whether the code is valid and usually unique. This stops basic key sharing, because the company knows if too many people from all over the world are all using the same key. Perhaps there is some kind of identification involved using MAC address, too, with infinite registrations allowed on the same hardware but maybe a limited number on what appears to be a different computer.
This is still pretty easy and stops simple key sharing. People will actually have to get into cracking the software or faking the server response to get past it.
Sometimes the program itself is partially/mostly encrypted and is only decrypted by the online registration step. Depending on how well this is obfuscated then it can be pretty difficult and time consuming to crack. Bioshock was a high-profile example of this - debuting with a brand new encryption/copy protection scheme that took around two weeks from release to be broken.
Finally, a particularly guarded application might stay in constant contact with the server, refusing to work at all if the connection is severed.
If you know for sure that all your users will have reliable internet connections, then it can be considered quite a strong way to protect the app, at the cost of privacy and some users distrusting it as spyware.
In this case to get around the activation they would need to fake the server itself. Steam emulators and private WoW servers are an example of this.
And in the end, nothing is uncrackable.
In a nutshell: you can't.
Even very sophisticated systems (e.g. dongle keys) can be circumvented.
I guess your best call is to give a code to your customers and have an online check for that code, so that it cannot be used twice.
Of course, that can be circumvented too but...
As nico said, you really can't.
A simple solution might be to generate registration/activation codes based on hardware or software properties of the particular computer, e.g. the video card serial ID or the creation time of C:\Windows.
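A sketch of what such a hardware-derived check might look like in Swift; how you actually read a MAC address, board serial, or CPU ID is platform-specific and not shown, so the three values below are placeholders:

    import Foundation
    import CryptoKit

    // Hash several machine properties into one fingerprint string.
    func machineFingerprint(mac: String, boardSerial: String, cpuID: String) -> String {
        let combined = [mac, boardSerial, cpuID].joined(separator: "|")
        return SHA256.hash(data: Data(combined.utf8))
            .map { String(format: "%02x", $0) }
            .joined()
    }

    // On first activation, store the fingerprint (in a license file, database field, etc.).
    let storedFingerprint = machineFingerprint(mac: "00:1A:2B:3C:4D:5E",
                                               boardSerial: "BRD-0001",
                                               cpuID: "CPU-0001")

    // On every later launch, recompute and compare; a mismatch suggests a copied install.
    let currentFingerprint = machineFingerprint(mac: "00:1A:2B:3C:4D:5E",
                                                boardSerial: "BRD-0001",
                                                cpuID: "CPU-0001")
    print(currentFingerprint == storedFingerprint ? "same machine" : "copied to another machine")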
I have one idea; maybe it will work.
We can add an encrypted database field that is left empty at first. As soon as the software is installed on a machine, it reads the MAC address, the motherboard serial, and the processor ID, builds an encrypted value from the combination of the three, and writes it into that initially empty field.
After that, every time the application runs, it reads those three values again, recreates the encrypted value in the same way, and compares it with the value stored in the database field. If the stored value and the regenerated value are equal, the computer is the same one; otherwise the software has been installed on another machine, in which case you can delete everything, and even make the system unstable to punish the person as well :) ...
Please let me know your opinion about this idea.
The best way is to use some sort of hardware-locking in which your license code contains encrypted info about the machine on which it will run. Your software will then check for this info and match it with the current computer and if the match is successful, the license is deemed valid.
Sure, any scheme can be cracked by someone on the face of the planet, but that does not mean you shouldn't use a protection scheme.
If you are looking for a ready-made scheme for this, have a look at CryptoLicensing.
Companies such as ours (Wibu-Systems), Safe-Net, and Flexera (expensive) offer dongle-free solutions as well as ones based on hardware. But _simon was right in that a dongle is the only iron-clad protection. All software-based systems can be cracked; it's just that some are more difficult than others. Really good hardware-based solutions are effectively uncrackable. No one has yet cracked the CodeMeter stick unless the implementation was flawed.