What is the complexity of the CUBE operator in SQL?

What is the complexity (Big O notation) of a CUBE operation in SQL (Microsoft) or Oracle?
e.g.
SELECT x1, x2, SUM(x3)
FROM xyz
GROUP BY CUBE (x1, x2)

The complexity is:
2^c * n log(n)
where:
c = number of columns in the cube
n = number of rows in the table
The 2^c factor is for all combinations of the columns (the grouping sets). The n log(n) factor is for the aggregation operator, which is generally equivalent to a sort in the absence of an index.
Because c is never really that big -- for instance, even 10 columns would generate a lot of grouping sets -- we could treat 2^c as a constant (1,024 in that case) and say the operation is essentially n log(n).
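To see where the 2^c factor comes from, here is a minimal Python sketch (purely illustrative, not how a database engine implements it) that enumerates the grouping sets CUBE expands to:

from itertools import chain, combinations

def cube_grouping_sets(columns):
    # CUBE aggregates once per subset of the column list,
    # i.e. over the power set: 2^c grouping sets in total.
    return list(chain.from_iterable(
        combinations(columns, k) for k in range(len(columns) + 1)))

print(cube_grouping_sets(["x1", "x2"]))
# [(), ('x1',), ('x2',), ('x1', 'x2')] -- 2^2 = 4 grouping sets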

BIG(O) time complexity

What is the time complexity of the code below?
1)
def average(values, xlist, ylist):
    # Sum every entry of an xlist-by-ylist grid and return the mean.
    total = 0
    n = 0
    for r in range(xlist):
        for c in range(ylist):
            total += values[r][c]
            n += 1
    return total / n
2)
def print_characters():
    characters = {"a", "b", "c", "d"}
    for character in characters:
        print(character)
As I see it, the first code has O(xlist * ylist) complexity and the second has O(n).
Is this right?
Big O notation describes the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4n^2 - 2n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows on the order of n^2" and write: T(n) = O(n^2)
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g.
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
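As a quick worked check of this definition against the T(n) example above: for all n > 1 we have 4n^2 - 2n + 2 <= 4n^2 + 2n^2 = 6n^2, so the definition is satisfied with C = 6 and N = 1, confirming T(n) = O(n^2).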
So for your case your analysis is right:
the nested loops do xlist * ylist units of work, giving O(xlist * ylist), and the single loop over n characters gives O(n).
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
def average(values, xlist, ylist):
    total = 0
    n = 0
    for r in range(xlist):        # outer loop: runs xlist times
        for c in range(ylist):    # inner loop: runs ylist times per outer iteration
            total += values[r][c]
            n += 1
    return total / n

def print_characters():
    characters = {"a", "b", "c", "d"}
    for character in characters:  # runs once per character; with n characters this is O(n)
        print(character)
Big O notation describes what happens when the input gets very big: here the outer loop runs xlist times and, for each of those iterations, the inner loop runs ylist times.
Assume xlist = ylist = n with n = 100; then the total is n^2 = 10,000 runs.
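To make that concrete, here is a small Python sketch (my own illustration, with hypothetical function names) that counts the basic operations each piece of code performs:

def count_nested(n):
    # Count iterations of the nested loops for an n-by-n input.
    ops = 0
    for r in range(n):
        for c in range(n):
            ops += 1
    return ops

def count_single(n):
    # Count iterations of a single loop over n items.
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

print(count_nested(100))  # 10000 -- grows like n^2
print(count_single(100))  # 100   -- grows like n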

Big O notation and measuring time according to it

Suppose we have an algorithm that is of order O(2^n). Furthermore, suppose we multiplied the input size n by 2 so now we have an input of size 2n. How is the time affected? Do we look at the problem as if the original time was 2^n and now it became 2^(2n) so the answer would be that the new time is the power of 2 of the previous time?
Big O is not for telling you the actual running time, just how the running time is affected by the size of the input. If you double the size of the input the complexity is still O(2^n); n is just bigger.
number of elements (n)    units of work (2^n)
1                         2
2                         4
3                         8
4                         16
5                         32
...                       ...
10                        1024
20                        1048576
There's a misunderstanding here about how Big-O relates to execution time.
Consider the following formulas which define execution time:
f1(n) = 2^n + 5000n^2 + 12300
f2(n) = (500 * 2^n) + 6
f3(n) = 500n^2 + 25000n + 456000
f4(n) = 400000000
Each of these functions is O(2^n); that is, each can be shown to be at most M * 2^n for some constant M and all n beyond some starting value n0. But obviously, the change in execution time you notice when doubling the size from n1 to 2 * n1 will vary wildly between them (not at all in the case of f4(n)). You cannot use Big-O analysis to determine effects on execution time. It only defines an upper bound on the execution time (which is not even guaranteed to be a tight upper bound).
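To see this numerically, here is a small Python sketch (purely illustrative) that evaluates the four formulas above at n and 2n; the effect of doubling the input differs wildly even though all four are O(2^n):

# The four execution-time formulas from above.
f1 = lambda n: 2**n + 5000 * n**2 + 12300
f2 = lambda n: (500 * 2**n) + 6
f3 = lambda n: 500 * n**2 + 25000 * n + 456000
f4 = lambda n: 400000000

n = 20
for name, f in [("f1", f1), ("f2", f2), ("f3", f3), ("f4", f4)]:
    # Ratio of "execution time" after doubling the input size.
    print(name, f(2 * n) / f(n))
# Doubling n multiplies f1 by ~360000, f2 by ~2^20, f3 by ~2, f4 by 1.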
Some related academia below:
There are three notable bounding functions in this category:
O(f(n)): Big-O - This defines an upper-bound.
Ω(f(n)): Big-Omega - This defines a lower-bound.
Θ(f(n)): Big-Theta - This defines a tight-bound.
A given time function f(n) is Θ(g(n)) if and only if it is both Ω(g(n)) and O(g(n)) (that is, both lower- and upper-bounded).
You are dealing with Big-O, which is the usual "entry point" to the discussion; we will neglect the other two entirely.
Consider the definition from Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes:
f(x)=O(g(x)) as x tends to infinity
if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M|g(x)| for all x > x0
Going from here, assume we have f1(n) = 2^n. If we were to compare that to f2(n) = 2^(2n) = 4^n, how would f1(n) and f2(n) relate to each other in Big-O terms?
Is 2^n <= M * 4^n for some M and n0? Of course! Using M = 1 and n0 = 1, it is true. Thus, 2^n is upper-bounded by O(4^n).
Is 4^n <= M * 2^n for some M and n0? This is where you run into problems... no constant M keeps M * 2^n above 4^n as n gets arbitrarily large, because 4^n / 2^n = 2^n itself grows without bound. Thus, 4^n is not upper-bounded by O(2^n).
See comments for further explanations, but indeed, this is just an example I came up with to help you grasp the Big-O concept; it is not the actual algorithmic meaning.
Suppose you have an array, arr = [1, 2, 3, 4, 5].
An example of an O(1) operation would be directly accessing an index, such as arr[0] or arr[2].
An example of an O(n) operation would be a loop that iterates through your whole array, such as for elem in arr:.
n would be the size of your array. If your array is twice as big as the original array, n would also be twice as big. That's how variables work.
See the Big-O Cheat Sheet for complementary information.
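A minimal runnable sketch of the two operations just described:

arr = [1, 2, 3, 4, 5]

# O(1): direct index access touches a single element,
# no matter how large the array is.
print(arr[0], arr[2])

# O(n): the loop body runs once per element,
# so the work doubles when the array doubles.
for elem in arr:
    print(elem)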

Is O(mn) in O(n^2)?

Simple question. Working with an m x n matrix and I'm doing some O(mn) operations. My question is if O(mn) is in O(n^2). Looking at the Wikipedia on big O I would think so but I've always been pretty bad at complexity bounds so I was hoping someone could clarify.
O(mn) for an m x n matrix means that you're doing constant work for each value of the matrix.
O(n^2) means that, for each column, you're doing work that is O(# columns); note this bound does not grow with the # of rows.
So, in the end, it's a matter of whether m is greater than n: if m >> n, O(n^2) is the smaller bound; if m << n, O(mn) is the smaller bound.
m * n is O(n^2) if m is O(n).
I assume that for a matrix you will probably have m = O(n), where m is the number of columns and n the number of rows; in that case m * n = O(n^2). But who knows how many columns your matrix will have.
It all depends on what bounds m has.
Have a look at the definition of O(n).
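A quick worked example of that dependence: if m <= k * n for some constant k, then m * n <= k * n^2, so mn = O(n^2). But if, say, m = n^2, then mn = n^3, which is not O(n^2). So O(mn) is contained in O(n^2) only under an assumption like m = O(n).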

Fibonacci Sequence - Time Complexity

Given that fib(n)=fib(n-1)+fib(n-2) for n>1 and given that fib(0)=a, fib(1)=b (some a, b >0), which of the following is true?
fib(n) is
Select one or more:
a. O(n^2)
b. O(2^n)
c. O(((1-sqrt 5)/2)^n)
d. O(n)
e. Answer depends on a and b.
f. O(((1+sqrt 5)/2)^n)
Solving the Fibonacci sequence I got that:
fib(n)= 1/(sqrt 5) ((1+sqrt 5)/2)^n - 1/(sqrt 5) ((1-sqrt 5)/2)^n
But what would be the time complexity in this case? Would that mean the answers are c and f?
From the closed form of your formula, the term 1/(sqrt 5) ((1 - sqrt 5)/2)^n has limit 0 as n grows to infinity (since |(1 - sqrt 5)/2| < 1), therefore we can ignore this term. Also, since in time complexity theory we don't care about multiplicative constants, the following is true:
fib(n) = Θ(φ^n)
where φ = (1 + sqrt 5) / 2 a.k.a. the golden ratio constant.
So it's an exponential function and we can exclude a, d, and e. We can exclude c since, as was said, ((1 - sqrt 5)/2)^n has limit 0, so it cannot bound the growing function fib(n) from above. But answer b is also correct, because φ < 2 and O expresses an upper bound.
Finally, the correct answers are:
b, f
Note that Θ(φ^n) holds for any seeds a, b > 0: changing a and b changes only the constant factors in the closed form, not the base φ, which is determined by the recurrence itself.
As for computing fib(n): if we compute fib(n-1) and fib(n-2) recursively, the complexity is exponential; but if we save the last two values and reuse them, the complexity is O(n) and does not depend on a and b.
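A small Python sketch of the two approaches just mentioned (function names and seeds are my own):

def fib_recursive(n, a, b):
    # Naive recursion: recomputes the same subproblems, exponential time.
    if n == 0:
        return a
    if n == 1:
        return b
    return fib_recursive(n - 1, a, b) + fib_recursive(n - 2, a, b)

def fib_iterative(n, a, b):
    # Keep only the last two values: O(n) time, O(1) space.
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_recursive(10, 1, 1), fib_iterative(10, 1, 1))  # 89 89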

relational algebra natural join

Hi all, I have an exam coming up and am not getting much help from the lecturer on two questions on the practice exam. She has provided the answers but has not responded to my questions about them, so I'm hoping someone here will be able to explain why the answers are the way they are.
Consider the following two tables R and S with their instances:
R            S
A  B  C      D  E
a  x  y      x  y
a  z  w      z  w
b  x  k
b  m  j
c  x  y
f  g  h
a) πA(R ⋈B=D S)
The answer given is (a, b, c); why isn't it (a, a, b, c)? Does a projection make the result distinct?
b) πA(R ⋈B<>D S)
The answer given is (a, b, c, f); why is a an answer? B = D both times (for the values x and z), so why is it included?
a)In Relation Algebra, the projection operator provides duplicate elimination. In SQL this is not the default operation, but it is for relational algebra. Here is my source. At the moment, I can't recall why it does duplicate elimination, but this was my professor for databases and he is very knowledgeable. (I think it's because Relation Algebra uses set-logic and sets do not have duplicates.)
b)The joining of 2 tables creates a CROSS PRODUCT between the 2 tables. You have 6 rows and 2 rows. So the cross product is 6x2 = 12 rows. For row 1 of table R, you have a x y. This will be paired with x y AND z w resulting in [a x y x y] and [a x y z w]. The second pairing is valid for this relational algebra statement. Columns B and D do not match x != z.
a) πA(R ⋈B=D S)
the answer being (a, b, c), why isn't it (a, a, b, c)? does a projection make it distinct?
In relational algebra, duplicate tuples are not permitted; that's a main difference between SQL (where DISTINCT is needed) and relational algebra.
b) πA(R ⋈B<>D S)
the answer being (a, b, c, f), why is a an answer? B = D both times when the values are x and z, so why is this being printed out?
The join here pairs every tuple of R with every tuple of S that satisfies the condition, so in this case it also returns the tuples (a x y z w) and (a z w x y); thus a has to be in the resulting projection.
⋈B=D
This is not a natural join, because a "natural join" is a join that joins relations exclusively over attributes of the same name. The construct you describe might in some places be labeled/termed an "equijoin" or so, but it is certainly not a "natural join".
⋈B<>D
This is not a natural join, because a "natural join" joins together tuples of the argument relations if and only if the values of the common attributes are equal.
You are being hopelessly mistaught and miseducated. Reference material : "an introduction to database systems", C.J.Date. It won't do you any good for your exams, but if you seek a later career in database technology it might be worthwhile to remember this.
But to answer your actual questions (in line with the preceding answers):
a) The attribute value 'a' cannot appear twice in the result of a projection, because a projection produces a relation, a relation is defined to be a set, and sets cannot contain duplicates.
b) The [non-]natural join contains both the tuples (a x y z w) and (a z w x y): the "first" tuple from R paired with the "second" tuple from S, and the other way round. The projection therefore includes (a).
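A minimal Python sketch (my own illustration, with hypothetical names) that simulates the theta-join and projection on the instances above, showing why each A value appears only once:

# R has attributes (A, B, C); S has attributes (D, E).
R = [("a", "x", "y"), ("a", "z", "w"), ("b", "x", "k"),
     ("b", "m", "j"), ("c", "x", "y"), ("f", "g", "h")]
S = [("x", "y"), ("z", "w")]

# Theta-joins, built from the cross product plus the join condition.
join_eq  = [(a, b, c, d, e) for (a, b, c) in R for (d, e) in S if b == d]
join_neq = [(a, b, c, d, e) for (a, b, c) in R for (d, e) in S if b != d]

# Projecting onto A with a set mirrors relational algebra:
# duplicates are eliminated automatically.
print(sorted({a for (a, _, _, _, _) in join_eq}))   # ['a', 'b', 'c']
print(sorted({a for (a, _, _, _, _) in join_neq}))  # ['a', 'b', 'c', 'f']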