I am trying to implement an encryption protocol between a server and a client based on the Diffie-Hellman problem.
The problem is that when I compute ((t^(Rs*Rc)) mod p)^((1/Rc) mod q), it does not give me (t^Rs) mod p.
I have checked that even when I do
((t^Rc) mod p)^((1/Rc) mod q)
i.e. t.modPow(Rc, p).modPow(Rc.modInverse(q), p), it does not give me t back.
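For concreteness, here is a self-contained version of the round trip I expect to work. Note the assumptions I am making for this sketch: p = 2q + 1 is a safe prime and t lies in the subgroup of order q, so that t^q = 1 (mod p) and the inverse exponent can be taken mod q.

import java.math.BigInteger;
import java.security.SecureRandom;

public class ModPowInverseDemo {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();

        // Find a safe prime p = 2q + 1 (small bit sizes so the search stays fast).
        BigInteger q, p;
        do {
            q = BigInteger.probablePrime(128, rnd);
            p = q.shiftLeft(1).add(BigInteger.ONE);
        } while (!p.isProbablePrime(64));

        // Squaring an element in [2, p-2] lands in the order-q subgroup, so t^q = 1 (mod p).
        BigInteger t = new BigInteger(120, rnd).add(BigInteger.valueOf(2))
                          .modPow(BigInteger.valueOf(2), p);
        BigInteger Rc = new BigInteger(100, rnd).add(BigInteger.ONE);   // 0 < Rc < q

        // ((t^Rc) mod p)^(Rc^-1 mod q) mod p should equal t under the assumptions above.
        BigInteger roundTrip = t.modPow(Rc, p).modPow(Rc.modInverse(q), p);
        System.out.println(roundTrip.equals(t));   // prints true
    }
}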
I'm particularly interested in understanding the K prelude (how it is structured, why its content is what it is, how "kompile" calculates dependencies, etc.).
The main question is: what is the criterion for a hooked symbol from the K prelude to be copied into the generated Kore file?
Here are some examples of potential problems:
The symbol andBool is copied together with its associated rewrite rules, which does not seem to be the case for the symbol in_keys, which is copied without its rewrite rules.
Other symbols seem to be useless (for the IMP semantics) but still appear, with or without their rewrite rules, in the generated Kore file, such as countAllOccurrences, findChar, signExtendBitRangeInt or Float2String.
It seems that SortId is generated by the line syntax Id [token]. However, the lines syntax Bool ::= "true" [token] and syntax Bool ::= "false" [token] do not generate true and false symbols.
(Moreover, is it a deliberate choice that true and false are values and not constructors?)
The sort SortId is not generated for the following example, whereas some generated hooked symbols depend on this sort. This problem does not exist with the IMP semantics.
module MAX-OW-SYNTAX
imports INT
imports BOOL
syntax Exp ::= Int | "(" Exp ")" [bracket]
| "max" Exp Exp
endmodule
module MAX-OW
imports MAX-OW-SYNTAX
syntax KResult ::= Int
rule max X Y => Y requires X <Int Y
rule max X _ => X [owise]
endmodule
Is it correct that the K prelude is implemented in each language of each backend, and that an implementation in the Kore language is available in the K prelude?
Is there a defined interface that a new backend must implement? (For instance, Bag is obsolete, but Set, List and Map are not; however, I don't know the list of set operators, map operators, etc. that a new backend must provide.)
Is there a reason why andThenBool and andBool have the same semantics once implemented in the Kore syntax (Booleans module)?
Where are the rewrite rules defined for ==Bool, used in the definition of =/=Bool (Booleans module)?
The best reference point for the K internals is the User Manual, along with the K source for the prelude. To respond to your specific questions as best as I can:
in_keys only has simplification rules that apply on symbolic backends. These will not apply on concrete backends, and so those backends use the hooked implementation MAP.in_keys. Some functions (such as andBool) can be implemented both in K and as an efficient backend hook. For example, on the K LLVM backend, andBool is implemented by code generation. If a backend didn't support that hook, the (relatively) inefficient K rewriting implementation would be used.
The Id sort is built in for convenience. It represents program identifiers.
You haven't imported DOMAINS in this example. Doing so will pull in the Id sort and related rewrites.
Very roughly, and largely for internal purposes. Do you have a hypothetical K backend in mind, or is there a way in which the LLVM / Haskell backends provided by K are inadequate for your specific use case?
andThenBool is required to short-circuit its arguments; andBool is permitted to short-circuit, but may evaluate both arguments strictly. An implementation that makes both perform short-circuiting is valid.
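For intuition only, here is the same distinction in Java rather than K (this is just an analogy, not how the backends implement these symbols): && always short-circuits its right operand, while & on booleans evaluates both sides. andThenBool must behave like &&; andBool is allowed to behave like either.

public class ShortCircuitDemo {
    static boolean sideEffect() {
        System.out.println("right operand evaluated");
        return true;
    }

    public static void main(String[] args) {
        boolean a = false && sideEffect();   // prints nothing: the right side is skipped
        boolean b = false & sideEffect();    // prints "right operand evaluated"
        System.out.println(a + " " + b);     // false false
    }
}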
==Bool is implemented only in terms of a hook. In domains.md, you can see the hook(BOOL.eq) attribute that indicates how ==Bool is implemented.
Do let us know if you have further questions, or would like help implementing a specific semantics in K.
I've been reading this article on elliptic-curve crypto and how it works:
http://arstechnica.com/security/2013/10/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/
In the article, they state:
It turns out that if you have two points [on an elliptic curve], an initial point "dotted" with itself n times to arrive at a final point [on the curve], finding out n when you only know the final point and the first point is hard.
It goes on to state that the only way to find out n (if you only have the first and final points, and you know the curve equation) is to repeatedly dot the initial point until you finally have the matching final point.
I think I understand all this, but what confuses me is this: if n is the private key and the final point corresponds to the public key (which I think is the case), then doesn't it take exactly the same amount of work to compute the public key from the private key as it does to compute the private key from the public one (both just have to repeatedly dot a point on the curve)? Am I misunderstanding something about what the article is saying?
The one-way attribute of ECC and RSA is due to the Chinese Remainder Theorem (CRT): a series of arithmetic divisions where only the remainder is kept (i.e. the modulo operation, %), which results in information loss in the output. As a result, the person with the keys takes one direct path to generate the output, while any would-be attacker has to exhaust a massive number of possible paths in order to determine what key was used to create the output. If simple division were used instead of a modulo, then key data would be present in the output and it couldn't be used for cryptography.
If you lived in a world where you had a powerful enough computer to exhaust all possibilities, then the CRT wouldn't be useful as a cryptographic primitive. The computers we have now are fairly powerful, so we balance the power of our modern machines with a key size that introduces a large enough range of possibilities that they cannot be exhausted in a timeframe that matters.
The CRT is a subset of the P vs NP problem set, so perhaps proving P=NP may lead to a way of undermining the one-way aspect of asymmetric cryptography. We know that there is a way to factor using a quantum computer running Shor's Algorithm. Shor's Algorithm has shown that we can defeat the so-called "trapdoor", or one-way, attributes of CRT; it is still, however, an expensive attack to conduct.
The following lecture is my favorite description of the CRT. It shows that there are many possible solutions in one direction, forcing an attacker to exhaust them all, and only one solution in the other:
https://www.youtube.com/watch?v=ru7mWZJlRQg
EDIT: I previously stated that n is not the private key. In your example, n is either the server's or the client's private key.
How it works is that there is a starting point known to everybody.
You select a random integer k and apply the "dotting" operation k times. Then you send this new point to the server. (k is your private key.)
The server does the same with the starting point, but q times, and sends the result to you. (q is the server's private key.)
You take the point you got from the server and "dot" it k times. The final point is the starting point "dotted" k*q times.
The server does the same with the point it got from you. Again, its final point is the starting point "dotted" q*k times.
That means the final point (= the starting point "dotted" k*q times) is the shared secret, since all an attacker knows is the starting point, the starting point dotted k times, and the starting point dotted q times. Given only that data, it is practically impossible to find the final point unless k or q is known.
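To make the commutativity concrete, here is a toy sketch in Java on a tiny (insecure, purely illustrative) curve y^2 = x^3 + 2x + 2 over F_17 with starting point (5, 1); the curve, point, and the private keys 7 and 11 are made-up toy values. "Dotting" the point k times is scalar multiplication, and both parties end at the same shared point.

import java.math.BigInteger;

public class ToyEcdh {
    static final BigInteger P_MOD = BigInteger.valueOf(17);   // field prime
    static final BigInteger A = BigInteger.valueOf(2);        // coefficient a in y^2 = x^3 + a*x + b
    static final BigInteger[] INF = null;                     // point at infinity

    // Add two affine points on the curve (the "dot" operation).
    static BigInteger[] add(BigInteger[] p, BigInteger[] q) {
        if (p == INF) return q;
        if (q == INF) return p;
        BigInteger lambda;
        if (p[0].equals(q[0])) {
            if (p[1].add(q[1]).mod(P_MOD).signum() == 0) return INF;   // P + (-P) = infinity
            lambda = p[0].pow(2).multiply(BigInteger.valueOf(3)).add(A)               // doubling:
                         .multiply(p[1].shiftLeft(1).modInverse(P_MOD)).mod(P_MOD);   // (3x^2 + a) / (2y)
        } else {
            lambda = q[1].subtract(p[1])                                              // chord:
                         .multiply(q[0].subtract(p[0]).modInverse(P_MOD)).mod(P_MOD); // (y2 - y1) / (x2 - x1)
        }
        BigInteger x3 = lambda.pow(2).subtract(p[0]).subtract(q[0]).mod(P_MOD);
        BigInteger y3 = lambda.multiply(p[0].subtract(x3)).subtract(p[1]).mod(P_MOD);
        return new BigInteger[] { x3, y3 };
    }

    // "Dot" the point with itself k times (double-and-add scalar multiplication).
    static BigInteger[] mul(BigInteger k, BigInteger[] point) {
        BigInteger[] result = INF;
        BigInteger[] addend = point;
        for (int i = 0; i < k.bitLength(); i++) {
            if (k.testBit(i)) result = add(result, addend);
            addend = add(addend, addend);
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger[] start = { BigInteger.valueOf(5), BigInteger.valueOf(1) };   // public starting point
        BigInteger k = BigInteger.valueOf(7);    // your private key
        BigInteger q = BigInteger.valueOf(11);   // server's private key

        BigInteger[] yours = mul(k, start);      // you send this to the server
        BigInteger[] servers = mul(q, start);    // the server sends this to you
        BigInteger[] sharedA = mul(k, servers);  // you compute k * (q * start)
        BigInteger[] sharedB = mul(q, yours);    // the server computes q * (k * start)

        System.out.println(sharedA[0].equals(sharedB[0]) && sharedA[1].equals(sharedB[1]));   // true
    }
}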
EDIT: No, it does not take the same time to compute k from G = kP given the known values of G (the sent point) and P (the starting point). More in the comment section and:
For raising to a power, see exponentiation by squaring.
For ECC point multiplication, see point multiplication.
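To see why the forward direction is cheap, here is a minimal sketch of exponentiation by squaring (the integer analogue of double-and-add point multiplication): computing base^exp mod m takes only about log2(exp) multiplications, so even an enormous private exponent is processed quickly, while recovering the exponent from the result has no comparably fast method known.

import java.math.BigInteger;

public class SquareAndMultiply {
    static BigInteger modPow(BigInteger base, BigInteger exp, BigInteger m) {
        BigInteger result = BigInteger.ONE;
        BigInteger b = base.mod(m);
        for (int i = 0; i < exp.bitLength(); i++) {   // scan the exponent bits, least significant first
            if (exp.testBit(i)) {
                result = result.multiply(b).mod(m);   // "multiply" step for a 1 bit
            }
            b = b.multiply(b).mod(m);                 // "square" step on every iteration
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger base = BigInteger.valueOf(5);
        BigInteger exp = new BigInteger("123456789123456789");   // ~2^57: only ~57 loop iterations
        BigInteger m = new BigInteger("1000000007");
        System.out.println(modPow(base, exp, m).equals(base.modPow(exp, m)));   // true
    }
}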
When I read descriptions about how DH key exchange works, there's no mention of how the key-exchangers came to an agreement on which "group" (the p and g parameters) should be used to compute the public and private values. Looking at RFC 5114, it seems like there are quite a few choices.
I'd like to know if this negotiation is typically done during the exchange itself, and if not, if there's a description somewhere regarding how the algorithm would be different if it included that step.
Thanks for reading.
The p and g values are safe to pass unencrypted. If the client and server are on a network, either the client or the server generates the p/g values and passes them via network sockets. As long as each party's secret number is kept secret (duh..), the Diffie-Hellman exchange can be said to be safe, since an attacker would have to compute g^(ab) mod p = g^(ba) mod p, which is infeasible given that the p value is big enough.
Essentially, the most basic D-H exchange goes as follows (a code sketch follows the list):
Party A generates the values p, g and a, where g is the base/generator, p is the prime modulus, and a is A's secret exponent.
Party B (concurrently) generates secret value b.
Party A computes g^a mod p (we will call this value A from here on).
Party A sends p, g and A across the transmission medium.
Party B receives p, g, A.
Party B computes g^b mod p (we will call this value B).
Party B sends B across the transmission medium.
Party A receives B.
Party A computes B^a mod p and obtains the shared secret.
Party B (concurrently) computes A^b mod p and obtains the shared secret.
Note: if the p value is too small, it may be computationally cheaper to just iterate through 0 to p - 1, but this all depends on what you do after you generate the common secret.
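Here is a minimal sketch of the steps above using Java's BigInteger (the 512-bit prime and generator g = 2 are illustrative stand-ins; a real deployment would use a standardized group such as those in RFC 3526 or RFC 5114, with larger parameters):

import java.math.BigInteger;
import java.security.SecureRandom;

public class ToyDH {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();

        // Party A generates p, g, a.
        BigInteger p = BigInteger.probablePrime(512, rnd);   // prime modulus
        BigInteger g = BigInteger.valueOf(2);                // base/generator (illustrative choice)
        BigInteger a = new BigInteger(256, rnd);             // A's secret exponent

        // Party B generates its secret b.
        BigInteger b = new BigInteger(256, rnd);

        BigInteger A = g.modPow(a, p);   // A computes g^a mod p and sends p, g, A to B
        BigInteger B = g.modPow(b, p);   // B computes g^b mod p and sends B to A

        BigInteger secretA = B.modPow(a, p);   // A computes B^a mod p
        BigInteger secretB = A.modPow(b, p);   // B computes A^b mod p

        System.out.println(secretA.equals(secretB));   // true: both hold g^(a*b) mod p
    }
}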
I've got a generic cryptographic implementation using OpenSSL's BIGNUM library in C. Standard decryption is working fine, but I would also like to implement Shamir's secret sharing (SSS).
The problem I've run across is that BIGNUM only supports whole numbers, and as part of the Lagrange interpolation for SSS, I'll need to be multiplying by negative values.
Is there any way to do this? Otherwise, I can do my SSS in another language (Python?) as long as it is able to interact with the BIGNUMs produced by OpenSSL.
Any suggestions? TIA!
If you look at the BIGNUM structure in OpenSSL, you'll find a flag named neg. If the BIGNUM object represents a negative number, neg will be set to 1. Also, the BN_mul() function handles multiplication by a negative number correctly. So you can implement SSS with OpenSSL, no problem!
Modular arithmetic (over groups) only produces non-negative results, so I presume you want to use non-modular arithmetic? In that case you could simply keep a separate variable indicating whether the value is negative or not. The outcome of multiplying the absolute values is the same except for the sign anyway.
It's not as clean a design as possible, but for a few methods it would probably not matter that much. You could create separate methods that mimic the BN methods, plus an integer holding the sign (-1, 0 or 1).
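To illustrate the "keep everything modular" option (and the asker's fallback of doing the interpolation outside of OpenSSL), here is a sketch in Java's BigInteger rather than Python or BIGNUM; negative intermediate values such as (x_j - x_i) are simply reduced mod p, so no signed arithmetic is needed. The prime and shares below are made-up toy values.

import java.math.BigInteger;

public class ShamirRecover {
    // Reconstruct the secret f(0) from threshold-many shares (x[i], y[i]) over GF(p).
    static BigInteger recover(BigInteger[] x, BigInteger[] y, BigInteger p) {
        BigInteger secret = BigInteger.ZERO;
        for (int i = 0; i < x.length; i++) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (int j = 0; j < x.length; j++) {
                if (j == i) continue;
                num = num.multiply(x[j]).mod(p);                  // product of x_j
                den = den.multiply(x[j].subtract(x[i])).mod(p);   // product of (x_j - x_i); negatives reduce mod p
            }
            BigInteger li0 = num.multiply(den.modInverse(p)).mod(p);   // Lagrange basis value l_i(0)
            secret = secret.add(y[i].multiply(li0)).mod(p);
        }
        return secret;
    }

    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(2089);   // toy prime
        // Shares of f(X) = 1234 + 166*X + 94*X^2 (mod 2089), threshold 3.
        BigInteger[] x = { BigInteger.valueOf(1), BigInteger.valueOf(2), BigInteger.valueOf(3) };
        BigInteger[] y = { BigInteger.valueOf(1494), BigInteger.valueOf(1942), BigInteger.valueOf(489) };
        System.out.println(recover(x, y, p));      // prints 1234
    }
}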
My question is about RSA signing.
In case of RSA signing:
encryption -> y = x^d mod n,
decryption -> x = y^e mod n
x -> original message
y -> encrypted message
n -> modulus (1024 bit)
e -> public exponent
d -> private exponent
I know x, y, n and e. Knowing these can I determine d?
If you can factor n = p*q, then d*e ≡ 1 (mod m), where m = φ(n) = (p-1)*(q-1) (φ is Euler's totient function), in which case you can use the extended Euclidean algorithm to determine d from e (d*e - k*m = 1 for some k).
All these are very easy to compute, except for the factoring, which is designed to be intractably difficult so that public-key encryption is a useful technique that cannot be decrypted unless you know the private key.
So, to answer your question in a practical sense, no, you can't derive the private key from the public key unless you can wait the hundreds or thousands of CPU-years to factor n.
Public-key encryption and decryption are inverse operations:
x = y^e mod n = (x^d)^e mod n = x^(d*e) mod n = x^(k*φ(n) + 1) mod n = x * (x^φ(n))^k mod n = x mod n
where (x^φ(n))^k = 1 mod n because of Euler's theorem.
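As a toy walk-through of the above: once n is factored into p and q, d is just the modular inverse of e modulo φ(n), which BigInteger.modInverse computes via the extended Euclidean algorithm. The tiny primes are for illustration only; real moduli are 2048 bits or more.

import java.math.BigInteger;

public class ToyRSA {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(61);
        BigInteger q = BigInteger.valueOf(53);
        BigInteger n = p.multiply(q);                              // n = 3233
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE));   // φ(n) = 60 * 52 = 3120
        BigInteger e = BigInteger.valueOf(17);                     // public exponent
        BigInteger d = e.modInverse(phi);                          // private exponent d = 2753

        BigInteger x = BigInteger.valueOf(65);                     // the "message"
        BigInteger y = x.modPow(d, n);                             // sign:   y = x^d mod n
        BigInteger check = y.modPow(e, n);                         // verify: x = y^e mod n
        System.out.println(d + " " + check.equals(x));             // prints "2753 true"
    }
}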
The answer is yes under two conditions. One, somebody factors n. Two, someone slips the algorithm a mickey and convinces the signer to use one of several possible special values for x.
Applied Cryptography pages 472 and 473 describe two such schemes. I don't fully understand exactly how they would work in practice. But the solution is to use an x that cannot be fully controlled by someone who wants to determine d (aka the attacker).
There are several ways to do this, and they all involve hashing x, fiddling with the value of the hash in predictable ways to remove some undesirable properties, and then signing that value. The recommended techniques for doing this are called 'padding', though there is one very excellent technique, described in Practical Cryptography, that does not count as a padding method.
No. Otherwise a private key would be of no use.