How can I solve an O(N log2 N) question where the input size is doubled?

A method is O(N log2 N). It takes 0.001 seconds to run when N = 1,000.
What is the expected time for the method to run when N = 2,000?
I know the answer is approximately 0.0022 seconds, but I don't know how to reach that answer. Can someone help with this?

Since C(1000) = 0.001 and C(N) ≈ k · N · log2 N, we get k ≈ 0.001 / (1000 × 10) ≈ 1e-7, using log2 1000 ≈ 10.
So C(2000) ≈ 1e-7 × (2000 × 11) ≈ 0.0022, using log2 2000 ≈ 11.
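As a quick sanity check, the same estimate can be reproduced without rounding the logarithms to whole numbers; a minimal Python sketch:

```python
from math import log2

# Fit the constant k from the known data point: C(N) ≈ k * N * log2(N)
k = 0.001 / (1000 * log2(1000))   # exact log2(1000) ≈ 9.97 instead of 10

# Extrapolate to N = 2000
predicted = k * 2000 * log2(2000)
print(round(predicted, 4))   # ≈ 0.0022
```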

Split values in Millions,Thousands,Units in spark sql

How can I get a value split into billions, millions, thousands, units and decimals in Spark SQL?
So far I have tried the options below:
Thousands - ABS(Amount) % 1000000 Output: 562191.7974663
Units - ABS(Amount) % 1000 Output: 191.7974663
Decimals - ABS(Amount) % 1 Output: 0.7974663 or -0.56724
However, I am looking for an output as per below table.
Amount               Amt_Billions  Amt_Millions  Amt_Thousands  Amt_Units  Amt_Decimals
3946562191.7974663   3000000000    946000000     562000         191        7974663
-743245613.56724                   743000000     245000         613        56724
Please can anyone help me with this calculation.
Thanks in advance.
You can get the orders of magnitude like so:
floor(abs(Amount)%1000000000000/1000000000)*1000000000 Billions,
floor(abs(Amount)%1000000000/1000000)*1000000 Millions,
floor(abs(Amount)%1000000/1000)*1000 Thousands,
floor(abs(Amount)%1000) Units
For the decimal portion I think you will have to resort to a string expression: cast to string, find the index of the ".", take right(string, len(string) - index), and finally cast to int; unless it's just for display purposes, of course.
Hmmm . . .
floor(amount / 1000000000) * 1000000000 as billions,
floor((amount % 1000000000) / 1000000) * 1000000 as millions,
floor((amount % 1000000) / 1000) * 1000 as thousands,
floor(amount % 1000) as units
The decimals are the only tricky ones. The numbers don't make sense as-is, because 0.1, 0.01, and 0.001 would all resolve to 1. I would suggest expressing them in some fixed unit, such as millionths:
floor((amount % 1) * 1000000) as decimals
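For what it's worth, the integer buckets from the floor/mod expressions above can be sanity-checked outside Spark with plain Python (the function name `split_amount` is just for illustration):

```python
from math import floor

def split_amount(amount):
    """Split |amount| into billions / millions / thousands / units buckets,
    mirroring the SQL floor/mod expressions. (split_amount is a made-up name.)"""
    a = abs(amount)
    billions  = floor(a % 1_000_000_000_000 / 1_000_000_000) * 1_000_000_000
    millions  = floor(a % 1_000_000_000 / 1_000_000) * 1_000_000
    thousands = floor(a % 1_000_000 / 1_000) * 1_000
    units     = floor(a % 1_000)
    return billions, millions, thousands, units

print(split_amount(3946562191.7974663))  # (3000000000, 946000000, 562000, 191)
print(split_amount(-743245613.56724))    # (0, 743000000, 245000, 613)
```

The outputs match the first four columns of the requested table (with 0 where the table leaves a blank).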

How to find branching given time and depth for iterative deepening?

My professor posed the following question and I really don't know how to start solving it. Any help is welcome.
Let the search space be a tree with a uniform branching factor b (each node has exactly b children). We explore the space with iterative deepening, starting from the root. The program finds the first solution at depth 3 in 0.2 seconds, and the next solution at depth 5 in 10 seconds. We know that the third solution is at depth 9. Estimate approximately how much time the program will need to find the third solution.
Remember your school math: the sum of a geometric progression.
Tree looks like (example for b=3 children)
N
N N N
N N N N N N N N N
The number of nodes in the top k levels is 1 + b + b^2 + ... + b^(k-1), i.e.
S(k) = (b^k - 1) / (b - 1)
From the timings for k = 3 and k = 5:
S(5) / S(3) = 10 / 0.2
(b^5 - 1) / (b^3 - 1) = 10 / 0.2 = 50
Approximating (neglecting the -1 terms, which is fine once the powers are not small):
b^5 / b^3 = b^2 ≈ 50
To extrapolate to k = 9:
b^9 / b^5 = b^4 ≈ 50^2 = 2500
So the time is roughly 10 × 2500 = 25000 seconds ≈ 7 hours.
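The estimate above is just two ratios; this small sketch encodes the same approximation steps, assuming runtime is proportional to the node count S(k):

```python
# Runtime ∝ number of nodes S(k) = (b^k - 1)/(b - 1), approximated by b^k
# once k is not tiny, so timing ratios give powers of b directly.
ratio_5_to_3 = 10 / 0.2         # observed T(5)/T(3) ≈ b^2
b_squared = ratio_5_to_3        # ≈ 50
growth_5_to_9 = b_squared ** 2  # b^4 ≈ 2500
estimate_s = 10 * growth_5_to_9
print(estimate_s, estimate_s / 3600)  # 25000.0 seconds, ≈ 6.94 hours
```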

Time complexity of my algorithm

I'm having trouble finding the O-time of some algorithms. I've looked up quite a few O notations, but whenever my exercises get harder, I can't find the solution. Now I came across an algorithm and I can't really find the answer.
I've searched through Stack Overflow but found nothing that really helps me. The best post I found was this.
It only said what I already knew, not how to calculate or see the complexity. The second post did list some algorithms with solutions, but not how to find them.
The Code
for i = 1; i <= n; i++
    for j = 1; j <= i; j++
        for k = 1; k <= i; k++
            x = x + 1
Question
What is the time complexity of this algorithm?
Also, are there some good tutorials to help me understand this matter better?
Also, sorry if there's already a Stack Overflow post on this, but I couldn't find a good one to help me.
The loop over i runs n times.
For a given i, the loop over j runs i times, and for each j the loop over k runs i times.
So the innermost statement executes
sum_{i=1}^{n} i * i = sum_{i=1}^{n} i^2 = n(n+1)(2n+1)/6 ≈ n^3 / 3
= O(n^3)
You can also infer this from the final value of the variable x, which equals exactly that sum and is therefore roughly proportional to n^3.
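You can confirm the count empirically by simulating the loops and comparing against the closed form for the sum of squares:

```python
def count_iterations(n):
    """Simulate the triple loop and return the final value of x."""
    x = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):      # j = 1..i
            for k in range(1, i + 1):  # k = 1..i
                x += 1
    return x

n = 100
# x equals sum of i^2 for i = 1..n, i.e. n(n+1)(2n+1)/6 ≈ n^3 / 3
print(count_iterations(n), n * (n + 1) * (2 * n + 1) // 6)  # both 338350
```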

How to calculate the time complexity?

Assume that function f is in the complexity class O(N (log N)^2), and that for N = 1,000 the program runs in 8 seconds.
How do you write a formula T(N) that computes the approximate time it takes to run f for any input of size N?
Here is the answer:
8 = c × (1000 × 10)
c = 8 × 10^-4
T(N) = 8 × 10^-4 × (N log2 N)
I don't understand the first line where does the 10 come from?
Can anybody explain the answer to me please? Thanks!
T(N) is the estimated running time. c is the constant of proportionality: the per-step cost, which does not depend on the size of the input. The 10 comes from rounding to simplify the math; it is actually log2 of 1000, which is 9.965784. That is,
N × log2 N is
1000 × 10, or more precisely
1000 × 9.965784
(Note that this calculation treats the class as N log2 N, with a single log factor; for N (log2 N)^2 the 10 would become 10^2.)
O(N (log N)^2) describes how the runtime scales with N, but it's not a formula for calculating runtime in seconds. In fact, Big-O notation doesn't generally give the exact scaling function itself, but an upper bound on it as N becomes large. See here (there's a nice picture showing this last point).
If you're interested in a function's runtime in practice (particularly in the non-asymptotic regime, i.e. small N), one option is to actually run the function and measure it. Do this for multiple values of N, chosen on some grid (possibly with nonlinear spacing). Then, you can interpolate between these points.
Define S(N) = N (log N)^2.
If you can assume that S(N) bounds your program's running time for all N >= 1000,
then you can bound the execution time by the good ol' rule of three:
S(1000) corresponds to T(1000)
S(N) corresponds to T(N)
T(N) <= S(N) × T(1000) / S(1000) for all N >= 1000
S(1000) = 1000 × (log2 1000)^2 ≈ 10^5
T(1000) = 8
T(N) <= N (log N)^2 × 8 / 10^5
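Assuming the rule-of-three bound above, a small Python helper (the name `time_bound` is illustrative) makes the extrapolation concrete:

```python
from math import log2

def time_bound(n, t_ref=8.0, n_ref=1000):
    """Rule-of-three bound: T(N) <= S(N) * T(n_ref) / S(n_ref),
    with S(N) = N * (log2 N)**2."""
    s = lambda m: m * log2(m) ** 2
    return s(n) * t_ref / s(n_ref)

print(time_bound(1000))         # 8.0 by construction
print(round(time_bound(2000)))  # roughly 19 seconds
```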

Asymptotic analysis question: sum[log(i)*i^3, {i, n}] is big-theta (log(n)*n^4)

I've got a homework question that's been puzzling me. It asks you to prove that the function Sum[log(i)*i^3, {i, n}] (i.e., the sum of log(i)*i^3 from i = 1 to n) is Big Theta(log(n)*n^4).
I know that Sum[i^3, {i, n}] is (n(n+1)/2)^2 and that Sum[log(i), {i, n}] is log(n!), but I'm not sure 1) whether I can treat these two separately, since they're part of the same product inside the sum, and 2) how to start getting this into a form that will help with the proof.
Any help would be really appreciated. Thanks!
The series looks like this: log 1 + log 2 × 2^3 + log 3 × 3^3 + ... (up to n terms),
and its partial sums do not converge. So compare it with the integral
∫_1^n log x × x^3 dx = (1/4) n^4 log n - (1/16) n^4 (plus a constant), by integration by parts.
The dominating term is log n × n^4, therefore the sum belongs to Big Theta(log n × n^4).
The other way you could look at it is this.
The series is log 1 + log 2 × 8 + log 3 × 27 + ... + log n × n^3.
Since log n is the largest of the logarithmic factors, you can bound each log i from above by log n
and treat the series as log n × (1 + 2^3 + 3^3 + ... + n^3), which is
log n × [n^2 (n + 1)^2] / 4
Assuming f(n) = log n × n^4 and
g(n) = log n × [n^2 (n + 1)^2] / 4,
you can show that lim (n → ∞) f(n)/g(n) is a constant (it is 4; apply L'Hopital's rule, though simple algebra also works).
That's another way to prove that g(n) belongs to Big Theta(f(n)).
Hope that helps.
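A quick numeric check (Python, natural log) supports this: the ratio of the sum to log(n) × n^4 settles toward a constant, about 1/4, matching the integral estimate:

```python
from math import log

def ratio(n):
    """sum(log(i) * i^3, i = 2..n) divided by log(n) * n^4."""
    s = sum(log(i) * i ** 3 for i in range(2, n + 1))  # the i = 1 term is 0
    return s / (log(n) * n ** 4)

for n in (100, 1000, 10000):
    print(n, ratio(n))   # values approach 1/4 as n grows
```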
Hint for one part of your solution: how large is the sum of the last two summands of your left sum?
Hint for the second part: if you divide your left side (the sum) by the right side, how many summands do you get? How large is the largest one?
Hint for the first part again: Find a simple lower estimate for the sum from n/2 to n in your first expression.
Try the Big-O limit definition and use calculus.
For the calculus you might like to use a Computer Algebra System.
In the following answer I've shown how to do this with the open-source CAS Maxima:
Asymptotic Complexity of Logarithms and Powers