Modular arithmetic - cryptography

I'm new to cryptography and modular arithmetic. So, I'm sure it's a silly question, but I can't help it.
How do I calculate a from
pow(a,q) = 1 (mod p),
where p and q are known? I don't get the "1 (mod p)" part; it equals 1, doesn't it? If so, then what is "mod p" about?
Is this the same as
pow(a,-q) (mod p) = 1?

The (mod p) part refers not to the right hand side, but to the equality sign: it says that modulo p, pow(a,q) and 1 are equal. For instance, "modulo 10, 246126 and 7868726 are equal" (and they are also both equal to 6 modulo 10): two numbers x and y are equal modulo p if they have the same remainder on dividing by p, or equivalently, if p divides x-y.
Since you seem to be coming from a programming perspective, another way of saying it is that pow(a,q)%p=1, where "%" is the "remainder" operator as implemented in several languages (assuming that p>1).
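For example, in Python (a small sketch with made-up numbers; Python's three-argument pow does the reduction for you):

    a, q, p = 3, 3, 13
    print(pow(a, q) % p == 1)   # True: 27 % 13 == 1
    print(pow(a, q, p) == 1)    # True: same check, computed mod p throughout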
You should read the Wikipedia article on Modular arithmetic, or any elementary number theory book (or even a cryptography book, since it is likely to introduce modular arithmetic).
To answer your other question: to the best of my knowledge there is no general closed-form for finding such an a. Assuming that p is prime, use Fermat's little theorem to reduce q modulo p-1. If q divides p-1, you can produce such an a by taking a primitive root of p and raising it to the power (p-1)/q. (If q does not divide p-1, replace q by gcd(q, p-1), since a^q = 1 exactly when a^gcd(q, p-1) = 1.) [More generally, when p is not prime, you can reduce q modulo φ(p); then, assuming q divides φ(p) and you know a primitive root (say r) mod p, you can take r to the power φ(p)/q, where φ is the totient function -- this comes from Euler's theorem.]
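Here is a minimal sketch of that recipe in Python; using sympy for the primitive root is my own choice of library, and any generator of the multiplicative group mod p would do:

    # Find a with a^q = 1 (mod p) for prime p, assuming q divides p - 1.
    from sympy.ntheory import isprime, primitive_root

    def root_of_unity(p, q):
        assert isprime(p) and (p - 1) % q == 0
        r = primitive_root(p)           # generator of the group mod p
        return pow(r, (p - 1) // q, p)  # this element has multiplicative order q

    a = root_of_unity(13, 3)   # p = 13, q = 3 gives a = 3
    assert pow(a, 3, 13) == 1  # 3^3 = 27 = 2*13 + 1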

Not silly at all, as this is the basis for public-key encryption. You can find an excellent discussion on this at http://home.scarlet.be/~ping1339/congr.htm#The-equation-a%3Csup%3Ex.
PKI works by choosing p and q that are large and relatively prime. One (say p) becomes your private key and the other (q) is your public key. The encryption is "broken" if an attacker guesses p, given a^q (the encrypted message) and q (your public key).
So, to answer your question:
a^q = 1 (mod p)
This means a^q is a number that leaves a remainder of 1 when divided by p. We don't care about the integer portion of the quotient, so we can write:
a^q / p = n + 1/p
for any integer value of n. If we multiply both sides of the equation by p, we have:
a^q = np + 1
Solving for a we have:
a = (np+1)^(1/q)
The final step is to find a value of n that generates the original value of a. I don't know of any way to do this other than trial and error -- which equates to a "brute force" attempt to break the encryption.
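That trial-and-error search might look like the following sketch (my own illustration; qth_root and find_a are hypothetical helpers, and this is only feasible for tiny numbers):

    def qth_root(x, q):
        # Integer q-th root of x by binary search, or None if x is
        # not a perfect q-th power.
        lo, hi = 1, x
        while lo <= hi:
            mid = (lo + hi) // 2
            m = mid ** q
            if m == x:
                return mid
            lo, hi = (mid + 1, hi) if m < x else (lo, mid - 1)
        return None

    def find_a(p, q, max_n=10**6):
        # Try n = 0, 1, 2, ... until n*p + 1 is a perfect q-th power.
        for n in range(max_n):
            a = qth_root(n * p + 1, q)
            if a is not None and a > 1:
                return a
        return None

    print(find_a(13, 3))  # 3, since 3^3 = 27 = 2*13 + 1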

Related

Unnormalizing in Knuth's Algorithm D

I'm trying to implement Algorithm D from Knuth's "The Art of Computer Programming, Vol. 2" in Rust, but I'm having trouble understanding how to implement the very last step, unnormalizing. My natural numbers are a class where each number is a vector of u64 digits, i.e. base 2^64. Addition, subtraction, and multiplication have been implemented.
Knuth's Algorithm D is a Euclidean division algorithm which takes two natural numbers x and y and returns (q,r), where q = x / y (integer division) and r = x % y, the remainder. The algorithm depends on an approximation method which only works if the first digit of y is greater than b/2, where b is the base you're representing the numbers in. Since not all numbers are of this form, it uses a "normalizing trick": for example (if we were in base 10), instead of doing 200 / 23, we calculate a normalizer d and do (200 * d) / (23 * d) so that 23 * d has a first digit greater than b/2.
So when we use the approximation method, we end up with the desired q, but the remainder is multiplied by a factor of d. So the last step is to divide r by d so that we can get the q and r we want. My problem is, I'm a bit confused about how we're supposed to do this last step, as it requires division and the method it's part of is trying to implement division.
(Maybe helpful?):
The way that d is calculated is just by taking the integer floor of b-1 divided by the first digit of y. However, Knuth suggests that it's possible to make d a power of 2, as long as d times the first digit of y is greater than b/2. I think he makes this suggestion so that, instead of dividing, we can just do a binary shift for this last step. But I don't think I can do that, given that my numbers are represented as vectors of u64 values instead of bits.
Any suggestions?
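For what it's worth, a power-of-two d can still be divided out with a plain bit shift even when the number is stored as 64-bit limbs, because the shift just moves bits across limb boundaries. A minimal sketch (in Python for brevity, assuming little-endian base-2^64 limbs and 0 < k < 64):

    def shift_right(limbs, k):
        # Divide a little-endian base-2^64 number by 2^k.
        mask = (1 << k) - 1
        out, carry = [], 0
        for limb in reversed(limbs):            # most significant limb first
            out.append((limb >> k) | (carry << (64 - k)))
            carry = limb & mask                 # bits that drop down a limb
        return list(reversed(out))

    assert shift_right([0, 1], 1) == [1 << 63, 0]  # 2^64 / 2 = 2^63

The same logic translates directly to a Vec<u64> in Rust.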

Problem determining the bit length of a key from the modulus in the RSA algorithm

Here are two 64-bit (unsigned) integers
p = 13776308150928489016
q = 16488138731131959619
and their product
n = 112488352363349635896748360565917156710
The bit length of the product is floor(log2(n)) + 1 = 127.
Now here are another two 64-bit integers
p = 13275629912622491628
q = 16290498985329101221
and their product
n = 179030914337714357408535416678431567970
but this time the bit length is floor(log2(n)) + 1 = 128.
The reason is that the first product has a leading zero bit: it fits in 127 bits, one bit fewer than the 128-bit space that would otherwise be needed to represent it in memory.
The problem this causes is that I can't determine the bit length of the keys accurately. For example, here is a very short RSA key pair:
Public key : 7, 8371846783263706079
Private key : 2989945277626202443, 8371846783263706079
The modulus (8371846783263706079) is 63 bits, while the number I'm after is 64. To overcome this issue I have considered the following solutions:
Round up to the nearest 2^n
Store the key size in bits along with the key
Add some kind of padding to ensure all integers take up the same space (not sure how this would work in practice)
Which one is the correct solution?
As @r3mainer notes, the math needed here -- inequalities -- is not exotic. As for what tutorials say, well, they're just tutorials: they're trying to simplify as much as possible, so they leave out some details.
What you are observing is the following:
you want two primes, p and q, to have the same bit length k and their product N to have a bit length of 2k.
By the definition of what it means to have a bit length of k, we have the following inequality:
1) 2^(k-1) <= p, q < 2^k.
However, when we multiply p and q we discover a problem:
2) 2^(2k-2) <= N < 2^(2k)
This means that N = p*q may end up with a bit length of 2k-1 or 2k, but we don't want 2k-1.
In your example k=64.
To fix it, we need to tighten up the lower bound on p and q to the following:
3) sqrt(2^(2k-1)) <= p, q < 2^k.
Bearing in mind that all results are integers, we apply the ceiling function and get finally
4) ceiling(sqrt(2^(2k-1))) <= p, q < 2^k.
For k=64 this works out to:
13043817825332782213 <= p, q < 2^64
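As a sketch, generating a 128-bit modulus with bound 4) might look like this in Python (sympy.randprime is my choice of prime generator, an assumption; any other would do):

    from math import isqrt
    from sympy import randprime

    k = 64
    lo = isqrt(2**(2*k - 1) - 1) + 1   # ceiling(sqrt(2^(2k-1)))
    p = randprime(lo, 2**k)
    q = randprime(lo, 2**k)
    assert (p * q).bit_length() == 2 * k   # always exactly 128 bits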
An even simpler formulation is to make the bounds dynamic, as in the following:
first find p, of any size. Then we want
2^(2k-1) <= p*q < 2^(2k), so
5) 2^(2k-1)/p <= q < 2^(2k)/p will do the trick.
For RSA, we actually do want both primes to be sufficiently large and entropic, and yet not be too close to each other. We can do that by choosing p to have length k-1 or k-2 and applying 5).
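A sketch of that dynamic variant, under the same sympy assumption as above: fix p first, then confine q to the interval from 5).

    from sympy import randprime

    k = 64
    p = randprime(2**(k - 2), 2**(k - 1))   # p has k-1 bits, per the answer
    lo_q = -(-2**(2*k - 1) // p)            # ceiling(2^(2k-1) / p)
    q = randprime(lo_q, 2**(2*k) // p)
    assert (p * q).bit_length() == 2 * k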

Is this O(N) algorithm actually O(logN)?

I have an integer, N.
I denote f[i] = number of appearances of the digit i in N.
Now, I have the following algorithm.
FOR i = 0 TO 9
    FOR j = 1 TO f[i]
        k = k*10 + i;
My teacher said this is O(N). It seems to me more like an O(log N) algorithm.
Am I missing something?
I think that you and your teacher are saying the same thing, but it gets confusing because the integer you are using is named N, while it is also common to refer to an algorithm that is linear in the size of its input as O(N). N is getting overloaded as both the specific name and the generic figure of speech.
Suppose we say instead that your number is Z and its digits are counted in the array d and then their frequencies are in f. For example, we could have:
Z = 12321
d = [1,2,3,2,1]
f = [0,2,2,1,0,0,0,0,0,0]
Then the cost of going through all the digits in d and computing the count for each will be O(size(d)) = O(log Z). This is basically what your inner loop is doing in reverse: it executes once for each occurrence of each digit. So you are right that there is something logarithmic going on here -- the number of digits of Z is logarithmic in the size of Z. But your teacher is also right that there is something linear going on here -- counting those digits is linear in the number of digits.
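A quick illustration of that logarithmic part (building f for the example Z above; the loop runs once per digit, i.e. about log10(Z) times):

    Z = 12321
    f = [0] * 10
    while Z > 0:
        Z, digit = divmod(Z, 10)   # strip one digit per iteration
        f[digit] += 1
    print(f)  # [0, 2, 2, 1, 0, 0, 0, 0, 0, 0]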
The time complexity of an algorithm is generally measured as a function of the input size. Your algorithm doesn't take N as an input; the input seems to be the array f. There is another variable named k which your code doesn't declare, but I assume that's an oversight and you meant to initialise e.g. k = 0 before the first loop, so that k is not an input to the algorithm.
The outer loop runs 10 times, and the inner loop runs f[i] times for each i. Therefore the total number of iterations of the inner loop equals the sum of the numbers in the array f. So the complexity could be written as O(sum(f)) or O(Σf) where Σ is the mathematical symbol for summation.
Since you defined N as the integer whose digits f counts, it is in fact possible to prove that O(Σf) is the same thing as O(log N), so long as N is a positive integer. This is because Σf equals the number of digits N has, which is approximately (log N) / (log 10). So by your definition of N, you are correct.
My guess is that your teacher disagrees with you because they think N means something else. If your teacher defines N = Σf then the complexity would be O(N). Or perhaps your teacher made a genuine mistake; that is not impossible. But the first thing to do is make sure you agree on the meaning of N.
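For concreteness, here is a runnable version of the algorithm (with k initialised to 0 as assumed above, and reusing the f from the earlier 12321 example); the inner loop runs Σf = 5 times in total:

    f = [0, 2, 2, 1, 0, 0, 0, 0, 0, 0]   # digit counts for 12321
    k, steps = 0, 0
    for i in range(10):
        for _ in range(f[i]):
            k = k * 10 + i
            steps += 1
    print(k, steps)  # 11223 5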
I find your explanation a bit confusing, but let's assume N = 9075936782959 is an integer. Then O(N) doesn't really make sense; O(length of N) makes more sense. I'll use n for the length of N.
Then f(i) = "iterate over each digit in N and count how many times i appears", which makes computing f(i) O(n) (it's linear). I'm assuming f(i) is a function, not an array.
Your algorithm loops at most:
10 times (first loop)
0 to n times, but the total is n (the sum of f(i) for all digits must be n)
It's tempting to say the algorithm is then O(algo) = 10 + n*f(i) = n^2 (removing the constant), but f(i) is only calculated 10 times, once each time the second loop is entered, so O(algo) = 10 + n + 10*f(i) = 10 + 11n = n. If f(i) is an array, the lookup is constant time.
I'm sure I didn't see the problem the same way as you. I'm still a little confused about the definition in your question. How did you come up with log(n)?

Big O notation and measuring time according to it

Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiplied the input size n by 2 so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and now it became 2^(2n) so the answer would be that the new time is the power of 2 of the previous time?
Big O is not for telling you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input, the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
 1                         2
 2                         4
 3                         8
 4                         16
 5                         32
 ...                       ...
 10                        1024
 20                        1048576
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be less than M * 2^n for some constant M and all n beyond some starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (which is not even guaranteed to be the tightest upper bound).
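You can see this numerically; all four functions are O(2^n), yet doubling n from 10 to 20 scales them by wildly different factors:

    f1 = lambda n: 2**n + 5000*n**2 + 12300
    f2 = lambda n: 500 * 2**n + 6
    f3 = lambda n: 500*n**2 + 25000*n + 456000
    f4 = lambda n: 400000000
    for f in (f1, f2, f3, f4):
        print(round(f(20) / f(10), 2))   # 5.96, 1023.99, 1.53, 1.0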
Some related academia below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper bound.
Ω(f(n)): Big-Omega - This defines a lower bound.
Θ(f(n)): Big-Theta - This defines a tight bound.
A given time function f(n) is Θ(g(n)) if and only if it is also Ω(g(n)) and O(g(n)) (that is, both upper and lower bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have g1(n) = 2^n. If we were to compare that to g2(n) = 2^(2n) = 4^n, how would g1(n) and g2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some M and n0? Of course! Using M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some M and n0? This is where you run into problems: there is no constant M for which M * 2^n keeps up with 4^n as n gets arbitrarily large. Thus, 4^n is not upper-bounded by O(2^n).
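The failure is easy to see numerically: the ratio 4^n / 2^n is itself 2^n, which is unbounded, so no fixed M can ever cover it:

    for n in (1, 10, 20, 30):
        print(n, 4**n // 2**n)   # prints 2, 1024, 1048576, 1073741824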
See comments for further explanations, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through your whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
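Spelled out in Python:

    arr = [1, 2, 3, 4, 5]
    x = arr[2]          # O(1): one lookup, regardless of len(arr)
    for elem in arr:    # O(n): the work grows with len(arr)
        print(elem)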
See the Big-O Cheat Sheet for complementary information.

Why does the security of RSA depend on the non-factorability of the modulus n?

Just wondering why does the security of RSA depend on the non-factorability of the modulus n?
Cheers!
Well, the non-factorability of the modulus n is not the whole story.
As Vlad already pointed out, you can easily calculate the private exponent if you know the factors of n: (p-1)(q-1). More generally, if you know that n is a product of distinct primes P[i], you can calculate the product of all (P[i] - 1). That is Euler's phi function, which counts the number of invertible multiplicative elements mod n.
If you can factorize n, that calculation becomes trivial. If n consists of only 2 large primes and that factorization is hard, it isn't trivial at all.
However, if you come up with another idea for calculating phi(n), the number of elements mod n that have a multiplicative inverse, then factorization would probably no longer be your problem.
Currently there is no other publicly known way of calculating phi than Euler's: prod(P[i] - 1).
So either finding a way to factorize, or calculating phi(n) a different way, would probably lead to breaking RSA.
The public data in RSA is n - the public modulus, and e - the public exponent. The secret is d - the private exponent.
When creating the parameters you first generate two random primes p and q and then compute the public modulus n = p*q. So p and q are the factorization of n. Actually you could use more primes, but most use just two.
Then you choose the public exponent e, which is usually a small prime such as 65537 or 17 or even 3.
Your secret exponent d would then be d = 1/e mod (p-1)(q-1).
So clearly anyone could compute d if they knew p and q, which is the factorization.
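A toy illustration of why the factorization must stay secret (tiny textbook numbers, nowhere near secure):

    p, q = 61, 53
    n = p * q                  # public modulus
    e = 17                     # public exponent
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)        # private exponent, recoverable from p and q
    c = pow(42, e, n)          # encrypt m = 42
    assert pow(c, d, n) == 42  # anyone who factors n can decrypt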