How to calculate the complexity in lattice cryptography papers? What is the result of O(n log(poly(n))) * ω(log(poly(n)))?

What is the result of O(n log(poly(n))) * ω(log(poly(n)))?
This expression summarizes the efficiency of lattice-based schemes used in cryptography. ω(log n) denotes a super-logarithmic growth rate. What does it have to do with the O(n log(poly(n))) factor?
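For concreteness, here is a hedged simplification, assuming poly(n) means n^c for some constant c > 0 (the usual reading in lattice papers); the function f below is just one illustrative member of ω(log n):

    % Assuming poly(n) = n^c for a constant c > 0, the logarithm collapses:
    \log(\mathrm{poly}(n)) = \log(n^c) = c \log n = \Theta(\log n)

    % so, up to the hidden constants, the product is
    O(n \log n) \cdot \omega(\log n)

    % Picking a concrete super-logarithmic witness, e.g. f(n) = \log n \cdot \log\log n:
    O(n \log n) \cdot f(n) = O(n \log^2 n \cdot \log\log n)

Informally, the product is "n times slightly more than log^2 n": it describes a family of bounds, one for each choice of the ω(log n) factor, rather than a single closed-form class.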

Related

Effective time complexity for n-bit addition and multiplication

I have taken a course on Computer Architecture, where it was mentioned that on the most efficient processors with an n-bit word size, the addition/subtraction of two words has a time complexity of O(log n), while multiplication/division has a time complexity of O(n).
If you do not fix a particular architecture word size, the best time complexity of addition/subtraction is O(n) (https://www.academia.edu/42811225/Fast_Arithmetic_Speeding_up_Multiplication_Division_and_Addition_of_n_Bit_Numbers), and multiplication/division seems to be O(n log n log log n) (Schönhage-Strassen, https://en.m.wikipedia.org/wiki/Multiplication_algorithm).
Is this correct?
O(log n) is the latency of addition if you can use n-bit wide parallel hardware with stuff like carry-select or carry-lookahead.
O(n) is the total amount of work that needs doing, and thus the time complexity with a fixed-width ALU for arbitrary bigint problems as n tends towards infinity.
For a multiply, there are n partial products in an n-bit multiply, so adding them all (with a Dadda tree, for example) takes on the order of O(log n) gate delays of latency. Integer addition is associative, so you can do it in parallel: e.g. (a+b) + (c+d) is 3 additions with a latency of only 2, and it gets better from there.
Dadda trees can avoid some of the carry-propagation latency, so I guess they avoid the extra factor of log n you'd get if you just used normal addition of each partial product separately.
See Differences between Wallace Tree and Dadda Multipliers for more about practical considerations for huge Dadda trees.
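As a software analogy for the O(log n) latency claim (my own sketch, not from the answer; a real Dadda tree uses carry-save adders rather than plain pairwise sums, but the depth argument is the same):

    import math

    def tree_sum(values):
        """Sum a list in pairwise-parallel rounds; each round models one adder delay."""
        rounds = 0
        while len(values) > 1:
            # All adjacent pairs can be added simultaneously in hardware.
            values = [sum(values[i:i + 2]) for i in range(0, len(values), 2)]
            rounds += 1
        return values[0], rounds

    total, rounds = tree_sum(list(range(64)))       # 64 "partial products"
    print(total, rounds, math.ceil(math.log2(64)))  # 2016 6 6 -- depth is ceil(log2 n)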

Which one is better between O(log n) and O(log n^2)?

This is a question that my Data Structures course teacher set in a class test. What would be the proper answer here? Since log(n^2) = 2 log n, as far as I know it can be written as O(log n), since constant multipliers cancel out in big-O notation. So is one better than the other in any specific way?
Asymptotically they are the same.
Your reasoning is right: O(log n^2) can be simplified to O(log n), so the two classes are equal.
It's like having two algorithms that work on an array, where the first is O(n) and the second is O(2n).
If you count the operations performed, the second does twice as many as the first, but this does not matter for asymptotic notation.
They are of the same order, namely O(n).
In your specific example the order is O(log n) and they can be considered the same.
I would agree with you: any O(log(x^k)) is O(log(x)) for constant k. The computational complexity scales the same.
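A quick numerical check of the constant-factor claim (my own illustration): the ratio log(n^k) / log(n) is exactly the constant k, which big-O absorbs.

    import math

    for n in [10, 1_000, 1_000_000]:
        # log(n^2) / log(n) is always 2 (up to floating-point rounding):
        # the two bounds differ only by a constant factor.
        print(n, math.log(n ** 2) / math.log(n))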

How to prove the time complexity of quicksort is O(nlogn)

I don't understand the proof given in my textbook that the time complexity of Quicksort is O(n log n). Can anyone explain how to prove it?
Typical arguments that Quicksort's average-case running time is O(n log n) involve arguing that, on average, each partition operation divides the input into two roughly equal-sized parts. The partition operations take O(n) time. Thus each "level" of the recursive Quicksort implementation has O(n) time complexity (across all the partitions at that level), and the number of levels is however many times you can iteratively divide n by 2, which is O(log n).
You can make the above argument rigorous in various ways, depending on how rigorous you want it to be and on the background and mathematical maturity of your audience. A typical way to formalize it is to express the number of comparisons required by the average case of a Quicksort call as a recurrence relation like
T(n) = O(n) + 2 * T(n/2)
which can be proved to be O(n log n) via the Master Theorem or other means.
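To see the bound empirically, here is a small instrumented sketch (my own illustration, using a random pivot as a stand-in for the average case): the comparison count divided by n log2(n) settles near a constant as n grows.

    import math
    import random

    def quicksort(a, counter):
        """Return a sorted copy of a; counter[0] accumulates pivot comparisons."""
        if len(a) <= 1:
            return a
        pivot = random.choice(a)
        # Charge one comparison per element per partition
        # (the three scans below are for clarity, not efficiency).
        counter[0] += len(a)
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less, counter) + equal + quicksort(greater, counter)

    for n in [1_000, 10_000, 100_000]:
        counter = [0]
        quicksort([random.random() for _ in range(n)], counter)
        # The last column hovers near a constant (about 1.39 for random pivots).
        print(n, counter[0], round(counter[0] / (n * math.log2(n)), 2))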

Is O(nm + n^2 log n) time polynomial time?

If an algorithm runs in O(nm + n^2 log n) time, can you then say it runs in polynomial time?
I know that O(n log n) is O(n^2), so that part is polynomial time. I'm just not sure how the n^2 log n term works.
Remember that O(n) means "upper-bounded by a linear function". If a function T(n) is O(n), then n*T(n) is O(n^2).
Of course, you can also multiply T(n) by some other function that is O(n), not necessarily f(n) = n. So T(n)*O(n) is also O(n^2).
If you know that O(n log n) is O(n^2), then you can multiply both by a function that is O(n) and arrive at the conclusion that O(n^2 log n) is O(n^3), which is polynomial.
Finally, if O(a), O(b) and O(c) are all polynomial, then O(a+b+c) is also polynomial, because the sum can be upper-bounded by the term that grows fastest.
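A hedged worked bound, treating both n and m as input-size parameters (my own illustration; if instead m is polynomially bounded in n, say m <= n^k, the same reasoning gives O(n^{max(k+1, 3)})):

    % log n grows slower than n, so each term is bounded by a monomial:
    nm + n^2 \log n \;\le\; nm + n^3 = O(nm + n^3)

    % which is polynomial in n and m.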

What is the worst case time complexity of median of medians quicksort?

What is the worst-case time complexity of median-of-medians quicksort (where the pivot is determined by the median of medians, which takes O(n) time to find)?
According to Wiki,
The approximate median-selection algorithm can also be used as a pivot strategy in quicksort, yielding an optimal algorithm, with worst-case complexity O(n log n).
This is because the median-of-medians pivot guarantees a reasonably balanced split (the pivot always lands between roughly the 30th and 70th percentiles), so the bad partitioning that naive quicksort suffers on an already-sorted array cannot occur, and the recursion depth stays O(log n).
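Here is a minimal sketch of the scheme the quote describes (my own illustration, favoring clarity over the in-place partitioning a real implementation would use):

    def median_of_medians(a):
        """Approximate median in worst-case O(n): recurse on the group-of-5 medians."""
        if len(a) <= 5:
            return sorted(a)[len(a) // 2]
        groups = [a[i:i + 5] for i in range(0, len(a), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        return median_of_medians(medians)

    def quicksort(a):
        """Quicksort with a median-of-medians pivot: worst-case O(n log n)."""
        if len(a) <= 1:
            return a
        pivot = median_of_medians(a)
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]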