Client side: send the key (example: tabc-xkaf-gaga-gtax) to the server.
Server: check whether the key exists in the database; if YES, return TRUE as the response.
Client side: IF RESPONSE = TRUE THEN
OPEN FORM1
But that's not a secure way to do it, because someone can change the response of the server-side check and get the product for free, since Form1 will open anyway. Does anyone have a better way to do it?
Sign the response on the server side and verify the signature on the client side to check whether the response was altered.
Refer to this article for how to digitally sign and verify.
This is the typical solution, though, and suited to fairly demanding security requirements.
For a simple solution, you can send a hash of the response along with the server response, hash the response again on the client side, and compare the two hashes; if they match, the response was not changed.
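For illustration, here is a minimal sketch of the sign-and-verify idea above in Python with the cryptography package (the response format and the way the public key is shipped are assumptions; the same logic translates to VB.NET). The server signs its verdict with a private key that never leaves it, and the client verifies with an embedded public key before opening Form1:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # --- server side: the private key never leaves the server ---
    server_key = Ed25519PrivateKey.generate()
    response = b"TRUE:tabc-xkaf-gaga-gtax"   # bind the verdict to the submitted key
    signature = server_key.sign(response)

    # --- client side: ships with only the public key ---
    public_key = server_key.public_key()
    public_key.verify(signature, response)   # raises InvalidSignature if tampered with
    if response.startswith(b"TRUE"):
        pass  # safe to open Form1

Binding the verdict to the submitted key (and ideally a timestamp) also prevents an attacker from replaying a generic signed "TRUE" captured elsewhere.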
As you indicated, professional attackers can decompile and read VB.Net applications, and hence, as an initial conclusion, you cannot reliably protect your code. As a first step, you should obfuscate your code, for example with the tool mentioned by @Always_a_learner. However, this will not protect your code from reverse engineering 100% (a good discussion can be found here).
A good trick in such cases is to introduce some intentional dependencies: for example, perform some core calculations on the server (if possible) and return only the result to the client. In more detail: for core calculations, the client sends a request to the server, the server first verifies the requester's validity, and only if the sender is a valid user does it run the calculations and return the results.
This solution is fine only if you can accept the trade-off between speed and security, as in the sketch below.
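A minimal sketch of that dependency, assuming a Flask server (the endpoint name, the license check, and the calculation itself are hypothetical):

    from flask import Flask, request, jsonify, abort

    app = Flask(__name__)

    @app.route("/core-calculation")
    def core_calculation():
        # Verify the requester first (hypothetical database lookup).
        if not key_is_valid(request.args.get("license_key", "")):
            abort(403)
        # The valuable logic runs only on the server; the client
        # receives nothing but the result.
        result = do_core_calculation(request.args)  # hypothetical
        return jsonify({"result": result})

Since the client never contains the core logic, decompiling it yields nothing worth stealing.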
When it comes to querying databases you can rely on SQL, and when exchanging emails you can rely on SMTP.
They are both standards across vendors. We could generalize this to many other "standardized services".
This allows the backing provider of the service to be switched easily. I can change a local MySQL to a managed one, or swap one SMTP server for another, just by changing the application's configuration, without touching the code.
Question:
When it comes to "store binaries", like for example say a company that needs to store photos sent by users, PDFs sent by providers and phone-call recordings from the quality department...
...when it comes to "storing binaries over the net", is there any industry standard in the form of "key-value-metadata", where the value is the binary; the metadata is a free-form object (possibly with info about creation time, who the creator is, the context of the creation, etc.); and the ID is a function of the content (for example a SHA-1 or similar), which ensures anti-duplication measures by design?
When I say industry standard, I mean that I don't care whether, "behind" my connecting API, the "real" storage is implemented by Redis, MySQL, a plain file system, or AWS S3... just something "standard", offered by multiple parties, that I connect to and whose backing store I can "easily change" by configuration, switching from one storage provider to another with the same simplicity with which we change our SMTP server, not caring whether there's a sendmail or a postfix behind it.
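To make the pattern I mean concrete, here is a minimal sketch (the backend object and its put() method are placeholders for whatever store sits behind the API):

    import hashlib, json

    def store(blob: bytes, metadata: dict, backend) -> str:
        # The ID is a function of the content, so uploading the same
        # bytes twice maps to the same key: anti-duplication by design.
        blob_id = hashlib.sha1(blob).hexdigest()
        backend.put(blob_id, blob)                                    # hypothetical API
        backend.put(blob_id + ".meta", json.dumps(metadata).encode())
        return blob_id

Swapping Redis for S3 would then mean swapping the backend object, exactly as one swaps SMTP providers.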
I recently came across the concept of the ETag HTTP header (this). But I still have a question: for a particular HTTP resource, who is responsible for generating ETags?
In other words, is it the actual application, the container (e.g. Tomcat), or the web server/load balancer (e.g. Apache/Nginx)?
Can anyone please help?
An overview of typical algorithms used in web servers.
Consider we have a file with:
Size 1047, i.e. 417 in hex.
MTime, i.e. last modification, on Mon, 06 Jan 2020 12:54:56 GMT, which is 1578315296 seconds in Unix time, or 1578315296666771000 nanoseconds.
Inode, which is a physical file number: 66, i.e. 42 in hex.
Different web servers return ETags like:
Nginx: "5e132e20-417" i.e. "hex(MTime)-hex(Size)". Not configurable.
BusyBox httpd the same as Nginx
monkey httpd the same as Nginx
Apache/2.2: "42-417-59b782a99f493" i.e. "hex(INode)-hex(Size)-hex(MTime in microseconds)" (0x59b782a99f493 is 1578315296666771, the MTime in microseconds). Can be configured, but MTime will be in microseconds regardless.
Apache/2.4: "417-59b782a99f493" i.e. "hex(Size)-hex(MTime in microseconds)", i.e. without the inode, which is friendly for load balancing, where identical files have different inodes on different servers.
OpenWrt uhttpd: "42-417-5e132e20" i.e. "hex(INode)-hex(Size)-hex(MTime)". Not configurable.
Tomcat 9: W/"1047-1578315296666" i.e. weak "Size-MTime in milliseconds". This is an incorrect ETag: for a static file it should be strong, i.e. promise byte-for-byte (octet) equality.
lighttpd: "hashcode(42-1047-1578315296666771000)" i.e. INode-Size-MTime, then reduced to a simple integer by a hash function (DEK hash). Can be configured, but you can only disable individual parts (etag.use-inode = "disabled").
MS IIS: it has the form Filetimestamp:ChangeNumber, e.g. "53dbd5819f62d61:0". Not documented, not configurable, but can be disabled.
Jetty: based on last-modified time and size, then hashed. See Resource.getWeakETag().
Kitura (Swift): W/"hex(Size)-hex(MTime)", see StaticFileServer.calculateETag.
A few thoughts:
Hex numbers are used here so often because converting a number to hex is cheap and the resulting string is shorter than decimal.
The inode, while adding more guarantees, makes load balancing impossible and is very fragile: it changes if you simply copy the file during an application redeploy.
MTime in nanoseconds is not available on all platforms, and such granularity is not needed anyway.
Apache has a related bug: https://bz.apache.org/bugzilla/show_bug.cgi?id=55573
The order, MTime-Size versus Size-MTime, also matters: MTime is the part most likely to change, so putting it first lets the ETag string comparison fail earlier, saving a dozen CPU cycles.
Even though this is not a full checksum hash, it is definitely not a weak ETag: it is enough to signal that we promise byte-for-byte (octet) equality for Range requests.
Apache and Nginx together serve almost all traffic on the Internet, but most static files are served via Nginx, whose scheme is not configurable.
Nginx appears to use the most reasonable scheme, so if you are implementing your own, try to make it the same.
The whole ETag can be generated in C in one line (PRIx64 comes from <inttypes.h>):

    printf("\"%" PRIx64 "-%" PRIx64 "\"", last_mod, file_size);
My proposal is to take the Nginx scheme and make it the recommended ETag algorithm by the W3C.
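For illustration, a minimal Python sketch (the helper name is mine) that reproduces the Nginx-style ETag from a file's stat data:

    import os

    def nginx_style_etag(path):
        # Nginx format: "hex(MTime)-hex(Size)", lowercase hex, quoted.
        st = os.stat(path)
        return '"%x-%x"' % (int(st.st_mtime), st.st_size)

    # For the example file above (MTime 1578315296, Size 1047)
    # this yields: "5e132e20-417"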
As with most aspects of the HTTP specification, the responsibility ultimately lies with whoever is providing the resource.
Of course, it's often the case that we use tools (servers, load balancers, application frameworks, etc.) that help us fulfill those responsibilities. But there isn't any specification defining what a "web server", as opposed to the application, is expected to provide; it's just a practical question of which features are available in the tools you're using.
Now, looking at ETags in particular, a common arrangement is that the framework or web server can be configured to automatically hash the response (either the body or something else) and put the result in the ETag. Then, on a conditional request, it will generate the response, hash it to see whether it has changed, and automatically send the 304 conditional response if it hasn't.
To take two examples I'm familiar with: nginx can do this with static files at the web server level, and Django can do this with dynamic responses at the application level.
That approach is common, easy to configure, and works pretty well. In some situations, though, it might not be the best fit for your use case. For example:
To compute a hash to compare against the incoming ETag, you first have to have a response. So although the conditional response can save you the overhead of transmitting the body, it can't save you the cost of generating it. If generating your response is expensive and you have an alternative source of ETags (for example, version numbers stored in the database), you can use that to achieve better performance.
If you're planning to use the ETags to prevent accidental overwrites with state-changing methods, you will probably need to add your own application code to make your compare-and-set logic atomic.
So in some situations you might want to create your ETags at the application level. To take Django as an example again, it provides an easy way to supply your own function for computing ETags.
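For instance, a minimal sketch with Django's etag decorator (the Book model and its version column are hypothetical); Django calls the function before running the view body, so a match skips the expensive work:

    from django.views.decorators.http import etag
    # from myapp.models import Book  # hypothetical model with a version column

    def book_etag(request, book_id):
        # Derive the ETag from a stored version number instead of
        # hashing a rendered response.
        version = Book.objects.values_list("version", flat=True).get(pk=book_id)
        return f'"{book_id}-{version}"'

    @etag(book_etag)
    def book_detail(request, book_id):
        ...  # build the (possibly expensive) response only when needed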
In sum, it's ultimately your responsibility to provide the ETags for the resources you control, but you may well be able to take advantage of the tools in your software stack to do it for you.
The idea was to generate a random key for every file being uploaded, pass this key to S3 to encrypt the file, and store the key in the database. When a user wants to access the file, the key is read from the database and passed to S3 once again.
The first part works. My objects are uploaded and encrypted successfully, but I have issues with retrieving them.
Retrieving files with request headers set:
Setting the request headers such as x-amz-server-side-encryption-customer-algorithm etc. when performing the GET request on the resource works, and I am able to access it. But since I want to use these resources as the src of an <img> tag, I cannot perform GET requests that require headers to be set.
Thus, I thought about:
Pre-signed URLs:
To create a pre-signed URL, I built the HMAC-SHA1 of the required string and used it as the signature. The calculated signature is accepted by S3, but I get the following error when requesting the pre-signed URL:
Requests specifying Server Side Encryption with Customer provided keys must provide an appropriate secret key.
The URL has the form:
https://s3-eu-west-1.amazonaws.com/bucket-id/resource-id?x-amz-server-side-encryption-customer-algorithm=AES256&AWSAccessKeyId=MyAccessKey&Expires=1429939889&Signature=GeneratedSignature
The reason for the error seems pretty clear to me: at no point in the signing process was the encryption key used, so the request cannot work. As a result, I added the encryption key, as Base64 and MD5 representations, as parameters to the URL. The URL now has the following format:
https://s3-eu-west-1.amazonaws.com/bucket-id/resource-id?x-amz-server-side-encryption-customer-algorithm=AES256&AWSAccessKeyId=MyAccessKey&Expires=1429939889&Signature=GeneratedSignature&x-amz-server-side-encryption-customer-key=Base64_Key&x-amz-server-side-encryption-customer-key-MD5=Md5_Key
Although the key is now present (imho), I still get the same error message.
Question
Does anyone know how I can access my encrypted files with a GET request that does not provide any headers such as x-amz-server-side-encryption-customer-algorithm?
It seems intuitive enough to me that what you are trying should have worked.
Apparently, though, when they say "headers"...
you must provide all the encryption headers in your client application.
— http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html#sse-c-how-to-programmatically-intro
...they do indeed actually mean headers: S3 doesn't accept these particular values when delivered as part of the query string, as you would expect it to, given that S3 is sometimes flexible in that regard.
I've tested this, and that's the conclusion I've come to: doing this isn't supported.
A GET request with x-amz-server-side-encryption-customer-algorithm=AES256 included in the query string (and signature), along with the X-Amz-Server-Side-Encryption-Customer-Key and X-Amz-Server-Side-Encryption-Customer-Key-MD5 headers, does work as expected... as I believe you've discovered... but putting the key and key MD5 in the query string, with or without including them in the signature, seems to be a dead end.
It seemed somewhat strange, at first, that they wouldn't allow this in the query string, since so many other things are allowed there. But then again, if you're going to the trouble of encrypting something, there seems little point in revealing the encryption key in a link, not to mention that the key would then be captured in the S3 access logs, leaving the encryption fairly well pointless all around. Perhaps that was their motivation for requiring it to actually be sent in the headers and not the query string.
Based on what I've found in testing, though, I don't see a way to use encrypted objects with customer-provided keys in hyperlinks, directly.
Indirectly, of course, a reverse proxy in front of the S3 bucket could do the translation for you, taking the appropriate values from the query string and placing them into the headers instead... but it's really not clear what's to be gained by using customer-provided encryption keys for downloadable objects, compared to letting S3 handle the at-rest encryption with AWS-managed keys. At-rest encryption is all you're getting either way.
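If you do want the proxy route, here is a minimal sketch with Flask and boto3 (the route and the database lookup are hypothetical); a variant that fetches the key server-side rather than reading it from the query string, which keeps it out of links and logs:

    from flask import Flask, Response
    import boto3

    app = Flask(__name__)
    s3 = boto3.client("s3")

    @app.route("/files/<object_key>")
    def serve_file(object_key):
        # Hypothetical lookup: per-object key stored in your database.
        customer_key = lookup_key_from_database(object_key)
        obj = s3.get_object(
            Bucket="bucket-id",
            Key=object_key,
            SSECustomerAlgorithm="AES256",
            SSECustomerKey=customer_key,   # boto3 adds the key MD5 for you
        )
        return Response(obj["Body"].read(), content_type=obj["ContentType"])

The <img> tag then points at /files/<object_key>, and no encryption material ever appears in the URL.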
[Disclaimer: I know, if you know anything about crypto you're probably about to tell me why I'm doing it wrong - I've done enough Googling to know this seems to be the typical response.]
Suppose the following: you have a central authority that wants to issue login cookies for a given domain. On this domain you don't necessarily trust everyone, but there are a few key endpoints which should be able to read the cookie. I say a few, but in practice this number of "trusted" partners may be large. The cookie doesn't contain much information: a username, a timestamp, an expiry, a random number. It should of course remain small, for performance reasons, even after encryption (within reason). Now, there are two security issues:
1) We don't trust every webserver on this domain with user data. For this reason, the ability to read the cookie should be restricted to these trusted partners.
2) While we trust these partners to protect our user's data, we'd still like the central point of authority to be unforgeable (again, within reason).
Now, if we generate a private RSA key for the authority and keep it secret, and distribute the public key only to the "trusted partners", we should be able to encrypt with the private key and have the result readable by anyone with the public key. What I'm unclear on is: would it still be necessary to sign the message, or would the act of decrypting successfully be evidence that it was generated with the private key? Is there any way in which this scheme would be better or worse than disseminating a symmetric key to all parties involved and using that to encrypt, while using the private key merely to sign? And of course feel free to tell me all the ways this is a stupid idea, but bear in mind that practical arguments will probably be more convincing than rehashing Alice and Bob.
Oh, and implementation pointers would be welcome; one can find the basics on Google, but any "gotchas" involved would be useful to know!
Nate Lawson explains here and here why you can't securely use the public key as a closely-held secret decryption key (it's a subtle point, and a mistake plenty of others have made before you, so don't feel bad!).
Just use your private key to sign for authenticity, and a separate symmetric key for the secrecy.
I've read enough on interesting attacks against public key systems, and RSA in particular, that I agree absolutely with this conclusion:
Public key cryptosystems, and RSA in particular, are extremely fragile. Do not use them differently than they were designed.
(That means: Encrypt with the public key, sign with the private key, and anything else is playing with fire.)
Addendum:
If you're interested in reducing the size of the resulting cookies, you should consider using ECDSA rather than RSA to produce the signatures: ECDSA signatures are considerably smaller than RSA signatures of an equivalent security factor.
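A minimal sketch of the sign-only approach with Python's cryptography package (the cookie contents are illustrative). A P-256 ECDSA signature is roughly 70 bytes in DER encoding, versus 256 bytes for an RSA-2048 signature:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    private_key = ec.generate_private_key(ec.SECP256R1())  # held by the authority
    cookie = b"user=alice;ts=1578315296;exp=1578401696;nonce=42"

    signature = private_key.sign(cookie, ec.ECDSA(hashes.SHA256()))

    # Trusted partners hold only the public key:
    public_key = private_key.public_key()
    public_key.verify(signature, cookie, ec.ECDSA(hashes.SHA256()))  # raises if forged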
In cryptography, you are what you know. In your scenario, you have a central authority which is able to issue your cookies, and you want no other entity to be able to do the same; so the central authority must "know" some private data. Also, you want the "trusted web servers" to be able to access the contents of the cookies, and you do not want just anybody to read the cookies. Thus, the "trusted web servers" must also have their own private data.
The normal way would be for the authority to apply a digital signature to the cookies, and for the cookies to be encrypted with a key known to the trusted web servers. What you are thinking about looks like this:
There is an RSA modulus n and the two usual RSA exponents d and e (such that ed = 1 modulo both p-1 and q-1, where n = pq). The central authority knows d, the trusted web servers know e, and the modulus n is public.
The cookie is processed by the central authority by padding it into an integer c modulo n, and computing s = c^d mod n.
The trusted web servers access the cookie data by computing c = s^e mod n.
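A toy illustration of this scheme with textbook-sized numbers (purely to show the mechanics; real parameters would be enormous, and e would have to be large and secret):

    # p = 61, q = 53, n = 3233; classic textbook exponents
    n, d, e = 3233, 2753, 17       # e*d = 1 mod both p-1 and q-1

    c = 1234                       # padded cookie as an integer modulo n
    s = pow(c, d, n)               # central authority: s = c^d mod n
    assert pow(s, e, n) == c       # trusted web server recovers c = s^e mod n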
Although such a scheme may work, I see the following problems in it:
For basic security, e must be large. In usual RSA descriptions, e is the public exponent and is small (like e = 3). A small exponent is no problem when it is public, but since you do not want cookie contents to be accessible to third parties, you must make e big enough to resist exhaustive search. At the same time, the trusted web servers must not know p and q, only n. This means the trusted web servers will need to compute with a big modulus and a big exponent, without knowing the modulus factors. This seems a minor point, but it disqualifies many RSA implementation libraries. You will be "on your own", with big integers (and all the implementation issues known as "side-channel leaks").
Resistance of RSA signatures, and resistance of RSA encryption, have been well studied, but not together. It so happens that the padding is essential, and you do not use the same padding scheme for encryption and for signatures. Here, you want a padding scheme which will be good for both signature and encryption. Cryptographers usually consider that good security is achieved when hundreds of trained cryptographers have looked at the scheme for a few years, and found no blatant weakness (or fixed whatever weaknesses were found). Home-cooked schemes almost always fail to achieve security.
If many people know a secret, then it is not a secret anymore. Here, if all web servers know e, then you cannot revoke one trusted web server's privileges without choosing a new e and communicating the new value to all remaining trusted web servers. You would have the same problem with a shared symmetric key.
The problem of ensuring confidentiality and verifiable integrity at the same time is still being studied. You may look up signcryption. There is no established standard yet.
Basically I think you will be happier with a more classical design, with digital signatures used only for signing, and (symmetric or asymmetric) encryption for the confidentiality part. This will allow you to use existing libraries, with the least possible homemade code.
For the signature part, you may want to use DSA or ECDSA: they yield much shorter signatures (typically 320 bits for a DSA signature with security equivalent to a 1024-bit RSA signature). From the central authority's point of view, ECDSA also allows better performance: on my PC, using a single core, OpenSSL produces more than 6500 ECDSA signatures per second (on the NIST P-192 curve) and "only" 1145 RSA signatures per second (with a 1024-bit key). An ECDSA signature consists of two 192-bit integers, i.e. 384 bits to encode, while the RSA signatures are 1024 bits long. ECDSA on P-192 is considered at least as strong as, and probably stronger than, RSA-1024.
You should use a digital signature scheme of some sort, or some other mechanism aimed at solving the integrity problem of your scenario.
The encryption by itself isn't enough.
How would you know that the decrypted message is what it should be?
Decrypting a cookie encrypted with the right key will surely produce a "valid" cookie, but what happens when you decrypt a cookie encrypted with the wrong key, or just some meaningless data? Well, you might just get a cookie that looks valid! (The timestamps are in the range you consider valid, the username is legal, the random number is... uh... a number, etc.)
In most asymmetric encryption algorithms I know of, there is no built-in validation. That means that decrypting a message with the wrong key will not "fail"; it will only give you a wrong plaintext, which you must distinguish from a valid plaintext. This is where integrity comes into play, most commonly using digital signatures.
BTW, RSA is long studied and has several "gotchas", so if you plan to implement it from scratch you had better read up on how to avoid creating relatively-easy-to-break keys.
Public keys are, by definition, public. If you're encrypting with a private key and decrypting with a public key, that's not safe from prying eyes. All it says is: "this data is coming from person X, who holds private key X", and anyone can verify that, because the other half of the key is public.
What's to stop someone you don't trust putting public key X on a server you don't trust?
If you want a secure line of communication between two servers, you need all of those trusted servers to have their own public/private key pairs; we'll say key pair Y for one such server.
Server X can then encrypt a message with private key X and public key Y. This says: "server X sent a message that only Y could read, and Y can verify it was from X."
(And that message should contain a short-lived symmetric key, because public key crypto is very time-consuming.)
This is what SSL does. It uses public key crypto to set up a session key.
That being said, use a library. This stuff is easy to screw up.
As to your question "would the act of decrypting be evidence that it was generated with the private key": the answer is yes, if the recipient can do simple validation of the data. Say you have "User name : John, timestamp : <number>, expiry : dd/mm/yyyy". If the wrong public key is used to decrypt, the probability that you will still get "User name : <some letters>, timestamp : <only numbers>, expiry : ??/??/????" is essentially zero. You can validate using a regular expression like "User name : [a-zA-Z]+, timestamp : [0-9]+, expiry : ...." and drop the cookie if validation fails. You can even check that the day is between 1 and 31 and the month between 1 and 12, but you won't usually get that far: the regex will typically already fail at "User name : " if the wrong public key was used. If validation succeeds, you still have to check the timestamp and ensure that you are not facing a replay attack.
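A minimal sketch of that validation step (the cookie format is the hypothetical one above):

    import re

    COOKIE_RE = re.compile(
        r"User name : [a-zA-Z]+, timestamp : [0-9]+, "
        r"expiry : [0-9]{2}/[0-9]{2}/[0-9]{4}$"
    )

    def looks_valid(plaintext: bytes) -> bool:
        # A wrong key almost always yields bytes that are not even ASCII,
        # so the decode or the very first literal match fails immediately.
        try:
            return COOKIE_RE.match(plaintext.decode("ascii")) is not None
        except UnicodeDecodeError:
            return False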
However, consider the following:
RSA public key crypto is not designed for bulk encryption of structured data, as that can be exploited by an attacker. Public key crypto is typically used in two ways: 1) for securely transporting the symmetric key (which by definition has no structure) that will be used for bulk encryption, by encrypting that symmetric key with the public key; and 2) for digitally signing a document, by encrypting not the document but a hash of the document (again, something with no structure) with the private key. In your case you are encrypting the cookie, which has a definite structure.
You are depending on the public key not getting into the hands of the wrong person or entity.
Public key encryption is about 1000 times slower than symmetric key encryption. This may not be a factor in your case.
If you still want to use this approach, and you are able to distribute the public key only to the "trusted partners", you should generate a random session key (i.e. a symmetric key), encrypt it using the private key, and send it to all recipients who have the public key. You then encrypt the cookie using the session key. You can change the session key periodically. More infrequently, you can change the public key as well: generate a new public/private key pair and send the new public key, encrypted with the session key, to all recipients.
I presume you trust the "trusted partners" to decrypt and verify the cookie, but don't want them to be able to generate their own cookies? If that's not a problem, you can use a much simpler system: distribute a secret key to all parties, and use it both to encrypt the cookie and to generate an HMAC for it. No need for public key crypto, no need for multiple keys.
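A minimal sketch of that simpler system; Fernet from the Python cryptography package packages exactly this pattern (AES encryption plus an HMAC) under one symmetric key:

    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()   # distributed to every trusted party
    f = Fernet(shared_key)

    token = f.encrypt(b"user=alice;ts=1578315296;nonce=42")

    # Decryption and integrity verification happen in one step; a forged
    # or tampered token raises InvalidToken.
    plaintext = f.decrypt(token, ttl=3600)  # also rejects tokens older than an hour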
As an alternative to your key distribution approach, which may or may not be suitable for your application, consider using Kerberos, which uses symmetric key encryption, a single highly protected bastion server that controls all the keying material, and a clever set of protocols (see the Needham-Schroeder protocol).