What is the time complexity, in big-O notation, of logical operators like OR, AND, NOT?
Can they be expressed in this notation?
Example :
100111001 OR 10111100001
1011000 AND 111111
Ripple-carry mechanics aside (which only matter if your CPU is built in, let's say, Minecraft), you can consider those operations O(1).
Edit: This is of course the case only when the number of bits per operand doesn't exceed what the platform can process in one operation. If each operand is, say, 17 bits wide, then a CPU limited to 16-bit operations can't perform this with a single instruction. Writing n for the number of word-sized operations required, the complexity in that case is O(n).
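A quick sketch of that word-by-word case (the function name and the 16-bit word size are illustrative, not from any real API): each loop iteration performs one constant-time word operation, so the total is O(n) in the number of words.

```python
def and_wide(a, b, word_bits=16):
    """AND two arbitrarily wide non-negative integers one 16-bit word at a time."""
    mask = (1 << word_bits) - 1
    result, shift = 0, 0
    while a or b:
        # one O(1) word-sized AND per iteration
        result |= ((a & mask) & (b & mask)) << shift
        a >>= word_bits
        b >>= word_bits
        shift += word_bits
    return result
```

Bignum libraries do essentially this, just with the machine's native word size.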
I have read many explanations of amortized analysis and how it differs from average-case analysis. However, I have not found a single explanation that showed how, for a particular example for which both kinds of analysis are sensible, the two would give asymptotically different results.
The most widespread example of amortized running time analysis shows that appending an element to a dynamic array takes O(1) amortized time (where the running time of the operation is O(n) if the array's length is an exact power of 2, and O(1) otherwise). I believe that, if we consider all array lengths equally likely, then the average-case analysis will give the same O(1) answer.
So, could you please provide an example to show that amortized analysis and average-case analysis may give asymptotically different results?
Consider a dynamic array supporting push and pop from the end. In this example, the array capacity will double when push is called on a full array and halve when pop leaves the array size 1/2 of the capacity. pop on an empty array does nothing.
Note that this is not how dynamic arrays are "supposed" to work. To maintain O(1) amortized complexity, the array capacity should only halve when the size is alpha times the capacity, for alpha < 1/2.
In the bad dynamic array, when considering both operations, neither has O(1) amortized complexity, because alternating between them when the capacity is near 2x the size can produce Ω(n) time complexity for both operations repeatedly.
However, if you consider all sequences of push and pop to be equally likely, both operations have O(1) average time complexity, for two reasons:
First, since the sequences are random, the array's size behaves like a random walk on the natural numbers, so I believe the size will mostly be O(1).
Second, the array's size will only rarely be near a power of 2.
This shows an example where amortized complexity is strictly greater than average complexity.
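A sketch of the "bad" array described above, counting only element copies (the class and the accounting are my own; the doubling/halving policy is the one from the answer). Parking the structure at the doubling boundary and alternating pop/push makes every operation cost Θ(size):

```python
class BadArray:
    """Doubles capacity when a push finds the array full; halves as soon
    as a pop leaves size == capacity/2 (the 'bad' policy)."""
    def __init__(self):
        self.cap, self.size, self.copies = 1, 0, 0

    def _resize(self, new_cap):
        self.copies += self.size          # cost: copy every element over
        self.cap = max(1, new_cap)

    def push(self):
        if self.size == self.cap:
            self._resize(2 * self.cap)
        self.size += 1

    def pop(self):
        if self.size == 0:
            return
        self.size -= 1
        if self.size == self.cap // 2:
            self._resize(self.cap // 2)

# Park the structure right at the doubling boundary, then alternate.
a = BadArray()
for _ in range(9):
    a.push()                              # size 9, capacity 16
base = a.copies
for _ in range(100):
    a.pop()                               # size 8 == 16/2 -> shrink, copies 8
    a.push()                              # size 8 == cap 8 -> grow, copies 8
# every pop/push pair copied 16 elements: Theta(size) per operation
```

With the standard alpha < 1/2 shrink threshold, the same alternating sequence would trigger no resizes at all.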
They never give asymptotically different results. Average-case analysis means that weird data might not trigger the average case and might be slower; amortized analysis means that even weird data will have the same performance. But on average they'll always have the same complexity.
Where they differ is the worst-case analysis. For algorithms where slowdowns come every few items regardless of their values, the worst case and the average case are the same, and the amortized bound holds for both. For algorithms that can slow down based on the data itself, the worst case and the average case are different.
In "Pairing Heaps with Costless Meld", the author gives a priority queue with O(0) time per meld. Obviously, the average time per meld is greater than that.
Consider any data structure with worst-case and best-case inserts and removes taking I and R time. Now use the physicist's argument and give the structure a potential of nR, where n is the number of values in the structure. Each insert increases the potential by R, so the total amortized cost of an insert is I+R. However, each remove decreases the potential by R. Thus, each removal has an amortized cost of R-R=0!
The average cost is R; the amortized cost is 0; these are different.
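Spelled out with the standard amortized-cost definition, using the potential Φ = nR from the argument above:

```latex
\hat{c}_i = c_i + \Phi_i - \Phi_{i-1}
\qquad\text{so}\qquad
\hat{c}_{\text{insert}} = I + \bigl((n{+}1)R - nR\bigr) = I + R,
\qquad
\hat{c}_{\text{remove}} = R + \bigl((n{-}1)R - nR\bigr) = R - R = 0.
```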
There is a closed form for the Fibonacci sequence that can be obtained via generating functions. It is:
f_n = (phi^n - psi^n) / sqrt(5)
For what the terms mean, see the link above or here.
However, it is discussed here that this closed form isn't really used in practice because it starts producing wrong answers once n gets to around a hundred or more.
But in the answer here, it seems one of the methods employed is fast matrix exponentiation which can be used to get the nth Fibonacci number very efficiently in O(log(n)) time.
But then, the closed form expression involves a bunch of terms that are raised to the nth power. So, you could calculate all those terms with fast exponentiation and get the result efficiently that way. Why would fast exponentiation on a matrix be better than doing it on scalars that show up in the closed-form expression? And besides, looking for how to do fast exponentiation of a matrix efficiently, the accepted answer here suggests we convert to the diagonal form and do it on scalars anyway.
The question then is - if fast exponentiation of a matrix is good for calculating the nth Fibonacci number in O(log(n)) time, why isn't the closed form a good way to do it when it involves fast exponentiation on scalars?
With the "closed form" formula for computing Fibonacci numbers, you need to raise irrational numbers to the power n, which means you have to accept using approximations (typically, double-precision floating-point arithmetic) and therefore inaccurate results for large numbers.
On the contrary, in the "matrix exponentiation" formula for computing Fibonacci numbers, the matrix you are raising to the power n is an integer matrix, so you can do integer calculations with no loss of precision using a "big int" library to do arithmetic with arbitrarily large integers (or if you use a language like Python, "big ints" are the default).
So the difference is that you can't do exact arithmetic with irrational numbers but you can with integers.
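For concreteness, here is the exact-integer matrix version as a standard square-and-multiply sketch (`fib` and `mat_mul` are my own names). Python's built-in big ints keep every step exact:

```python
def fib(n):
    """n-th Fibonacci number via fast exponentiation of [[1,1],[1,0]]."""
    def mat_mul(A, B):
        # plain 2x2 integer matrix product
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

    result = [[1, 0], [0, 1]]          # identity matrix
    base = [[1, 1], [1, 0]]
    while n:                           # square-and-multiply: O(log n) products
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]                # [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]]
```

Every intermediate value is an integer, so no precision is ever lost, no matter how large n gets.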
Note that "in practice" here refers to competitive programming (in reality, you basically never want to compute massive Fibonacci numbers). So the first reason is that the normal way of calculating Fibonacci numbers is much faster to type without making any errors, and is less code. Plus, it is faster than the fancy method for small numbers.
When it comes to big numbers, matrix exponentiation needs only O(log n) multiplications if you don't care about precision (i.e., if you work with fixed-width numbers). However, in competitive programming we almost always care about precision and want the exact answer. For that, the numbers have to grow with n: F(n) itself has Θ(n) bits, so the matrix entries reach n bits and the O(log n) exact multiplications cost on the order of O(M(n) log n) in total, where M(n) is the cost of one n-bit multiplication. Not to mention it is harder to code and slow in practice, because you are multiplying arbitrary-precision numbers.
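You can watch the double-precision closed form drift; the exact cutoff depends on the platform's floating point, so none is asserted here, but by F(100) the answer cannot be exact (the function name is my own):

```python
import math

def fib_closed(n):
    """Binet's formula in double precision: exact only while the ~53 bits
    of a double can absorb the rounding error."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi ** n - psi ** n) / sqrt5)

# Small n: fine.  Large n: F(100) has ~69 bits, and doubles near 3.5e20
# are spaced tens of thousands apart, so the rounded result must be off.
```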
The task: find gcd(a, b) for integers a > b > 0.
Consider an algorithm that checks all of the numbers up to b and keeps track of the max number that divides a and b. It would use the % operator twice per check (for a and b). What would the complexity of this algorithm be?
I have not yet taken any formal CS courses in complexity theory (I will soon) so I am just looking for a quick answer.
The modulo operation is implemented in hardware, and it's effectively O(1). Strictly speaking it is not constant but depends on the number of bits of a and b. However, on a real machine the number of bits is fixed by the word size for all inputs, so we usually ignore this factor.
The worst-case complexity of brute-force GCD is just O(n) (equivalently O(b) or O(min(a, b)), which are the same here since a > b). In fact, the algorithm as described always performs all b checks, whatever the GCD turns out to be.
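The brute-force algorithm from the question, as a sketch (the name is mine): it always performs b iterations with two % operations each.

```python
def gcd_brute(a, b):
    """gcd(a, b) for integers a > b > 0, by trial division up to b."""
    best = 1
    for d in range(1, b + 1):
        if a % d == 0 and b % d == 0:   # two modulo operations per check
            best = d
    return best
```

Compare Euclid's algorithm, which needs only O(log min(a, b)) modulo operations.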
The article Computational complexity of mathematical operations mentions that the complexity of division is O(M(n)), and that "M(n) below stands in for the complexity of the chosen multiplication algorithm".
But I'm not sure how to read that M(n) embedded in O(M(n)): does it mean that the division has the same complexity as multiplication?
If I use, say, Karatsuba multiplication algorithm, will the division also take O(n^1.585)?
does it mean that the division has the same complexity as multiplication?
Formally, it means division cannot have a complexity worse than multiplication's. But in practice, the notation is often used to say they have the same complexity.
If I use, say, Karatsuba multiplication algorithm, will the division also take O(n^1.585)?
According to the statement, yes.
However, I am not sure that statement is correct as written. Indeed, looking at the Newton–Raphson method, I see it is an iterative process which has to be repeated a number of times on the order of log(n) to get an exact result (see the discussion about S here).
Done naively, with every iteration at full precision, the complexity would be O(log(n) M(n)). (In practice the working precision is doubled at each iteration, so the per-iteration costs form a geometric series that sums to O(M(n)).)
However, if it is not a problem for you to have only a fixed precision (i.e., a fixed number of correct digits) whatever the size of the operands, you can use a constant number of iterations, resulting in O(M(n)) complexity.
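The iteration in question, sketched for a divisor normalized into [1, 2) (a common normalization; the function name and the fixed iteration count are my own choices, which is exactly the fixed-precision setting):

```python
def newton_reciprocal(d, iterations=6):
    """Approximate 1/d for 1 <= d < 2 via the iteration x -> x * (2 - d*x).
    The error e = 1 - d*x satisfies e_next = e**2, so each iteration
    doubles the number of correct digits."""
    x = 1.0                    # initial guess; |1 - d*x| < 1 ensures convergence
    for _ in range(iterations):
        x = x * (2.0 - d * x)
    return x
```

For d = 1.5 the initial error is 0.5, so six iterations push it to 0.5^64, far below double precision; divisors close to 2 start with a larger error and need a few more steps.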
I've got some trouble understanding the difference between logarithmic (LCC) and uniform (UCC) cost criteria, and also how to use them in calculations.
Could someone please explain the difference between the two, and perhaps show how to calculate the complexity of a problem like A+B*C?
(Yes this is part of an assignment =) )
Thx for any help!
/Marthin
Uniform cost criteria assign a constant cost to every machine operation regardless of the number of bits involved, while logarithmic cost criteria assign a cost to every machine operation proportional to the number of bits involved.
Problem size influences complexity: since complexity depends on the size of the problem, we define complexity to be a function of problem size.
Definition: let T(n) denote the complexity of an algorithm applied to a problem of size n.
The size n of a problem instance I is the number of (binary) bits used to represent the instance, so problem size is the length of the binary description of the instance. This is called logarithmic cost criteria.
Unit cost criteria: if you assume that every computer instruction takes one time unit, every register is one storage unit, and a number always fits in a register, then you can use the number of inputs as the problem size, since the length of the input (in bits) will be a constant times the number of inputs.
Uniform cost criteria assume that every instruction takes a single unit of time and that every register requires a single unit of space.
Logarithmic cost criteria assume that every instruction takes a logarithmic number of time units (with respect to the length of the operands) and that every register requires a logarithmic number of units of space.
In simpler terms, what this means is that uniform cost criteria count the number of operations, and logarithmic cost criteria count the number of bit operations.
For example, suppose we have an 8-bit adder.
If we're using uniform cost criteria to analyze the run-time of the adder, we would say that addition takes a single time unit; i.e., T(N)=1.
If we're using logarithmic cost criteria to analyze the run-time of the adder, we would say that addition takes lg n time units, where n is the largest value the adder must handle (here n = 2^8 = 256, since an 8-bit adder covers the values 0 through 255). Thus T(N) = lg 256 = 8.
More specifically, say we're adding 200 to 32. To perform the addition, we have to add the binary bits together in the 1s column, the 2s column, the 4s column, etc. (columns meaning the bit positions). Any value below 256 fits in 8 bits, and lg 256 = 8; this is where logarithms come into our analysis. So to add the two numbers, we have to perform addition on 8 columns. Logarithmic cost criteria say that each of these 8 single-bit additions takes a single unit of time. Uniform cost criteria say that the entire set of 8 additions takes a single unit of time.
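The column-by-column process, spelled out (the function name and the pure-Python bit fiddling are just for illustration):

```python
def add_bits(a, b, width=8):
    """Ripple-carry addition of two non-negative ints, one bit column
    at a time: exactly `width` single-bit additions."""
    carry, result = 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        result |= (x ^ y ^ carry) << i               # sum bit for this column
        carry = (x & y) | (x & carry) | (y & carry)  # carry into the next column
    return result
```

Uniform cost criteria charge this whole loop one unit; logarithmic cost criteria charge one unit per iteration, i.e. lg 256 = 8 units.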
Similar analysis can be made in terms of space as well. Registers either take up a constant amount of space (under uniform cost criteria) or a logarithmic amount of space (under logarithmic cost criteria).
I think you should do some research on Big O notation... http://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions
If there is a part of the description you find difficult, edit your question.