Order of growth for given functions [closed] - time-complexity

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
I've tried to sort these functions in asymptotic growth order and would like to know if I'm on the right track.
5000log2(n)
sqrt(n) +7
8n
n/log2(n)
4nlog2(n)
n^(1/100)
(1/4)n^2 - 10000n

You can test whether f(n) is asymptotically larger than g(n) by checking whether

lim (n -> ∞) f(n) / g(n) = ∞

If the limit is a non-zero constant, f(n) and g(n) are asymptotically equal. If it is zero, f(n) is asymptotically smaller than g(n).
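As a quick numeric sketch of this limit test (illustrative only, not a proof), here is how the ratio behaves for one of the pairs from the question, f(n) = sqrt(n) + 7 and g(n) = n / log2(n):

```python
import math

# Numeric sketch of the limit test: if f(n)/g(n) shrinks toward 0 as n
# grows, f is asymptotically smaller than g. Evaluating at finite n is
# only suggestive, not a proof.
def f(n):
    return math.sqrt(n) + 7

def g(n):
    return n / math.log2(n)

for n in (10**3, 10**6, 10**9, 10**12):
    print(n, f(n) / g(n))
# The ratio keeps shrinking toward 0, which indicates that
# sqrt(n) + 7 is asymptotically smaller than n / log2(n).
```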
Most of your list looks correct, but there are a few mistakes:
n/log2(n) should be between sqrt(n) + 7 and 8n.
n^(1/100) is the 100th root of n and should come before the square root.

The corrected list would be:
1) 5000log2(n)
2) n^(1/100)
3) sqrt(n) + 7
4) n/log2(n)
5) 8n
6) 4nlog2(n)
7) (1/4)n^2 - 10000n
as far as I know.
For more information on the topic, see the definitions of Big-O, Big-Theta, and Big-Omega.
Corrections to the above list are most welcome.

Related

Time complexity apparently exponential [closed]

I got a question:
What would be the time complexity of this function?
Function (int n) {
    for (i = 1 to n):
        print("hello")
}
Apparently it's exponential because of binary numbers or something? It should be O(n), right?
This is clearly O(n): the function prints "hello" n times, so the time complexity is linear, not exponential. The "binary numbers" remark likely refers to measuring complexity in the size of the input's encoding: n takes about log2(n) bits to write down, so n iterations is exponential in the bit-length of the input. Under the usual convention of measuring complexity in the value n, though, the loop is O(n).
Since the for loop runs from 1 to n, the complexity is O(n). It is linear.
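To back up the answers above with a quick check, this small sketch counts the loop's basic operations (the counter is an illustrative stand-in for the print call):

```python
# Sanity check: count the loop's basic operations for several values of
# n and confirm the count grows linearly with n, not exponentially.
def hello_count(n):
    count = 0
    for i in range(1, n + 1):
        count += 1  # stands in for print("hello")
    return count

print([hello_count(n) for n in (10, 100, 1000)])  # -> [10, 100, 1000]
```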

Example of Polynomial time algorithm [closed]

What is an example of a polynomial-time algorithm?
Is a polynomial-time algorithm the fastest?
Suppose there are 100 elements in an array; how can I decide whether an algorithm is polynomial time?
Q: What is an example of a polynomial-time algorithm?
for (i = 0; i < n; ++i)
    printf("%d", i);
This is a linear algorithm, and linear time is a special case of polynomial time (O(n) = O(n^1)).
Q: Is a polynomial-time algorithm the fastest?
No, logarithmic and constant-time algorithms are asymptotically faster than polynomial algorithms.
Q: Suppose there are 100 elements in an array; how can I decide whether an algorithm is polynomial time?
You haven't specified any algorithm here, just the data structure (an array with 100 elements). To determine whether an algorithm runs in polynomial time, find its Big-O complexity: if it is O(n^k) for some constant k, it is polynomial time.
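As a sketch of that point, here are two toy algorithms on a 100-element array. Counting their basic operations shows linear versus quadratic growth, and both are polynomial:

```python
# Two toy algorithms on a 100-element array. Counting basic operations
# shows O(n) versus O(n^2) growth; both counts are polynomial in n,
# so both algorithms run in polynomial time.
def linear_scan(arr):
    ops = 0
    for x in arr:          # one pass: O(n)
        ops += 1
    return ops

def all_pairs(arr):
    ops = 0
    for x in arr:          # nested passes: O(n^2)
        for y in arr:
            ops += 1
    return ops

arr = list(range(100))
print(linear_scan(arr), all_pairs(arr))  # -> 100 10000
```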

All versions of differential evolution algorithm [closed]

Please explain all the updates to the basic differential evolution algorithm. I am not able to find all versions of this algorithm, and I do not clearly understand the theory behind it as given on Wikipedia, which only defines the basic algorithm. I would like a survey of all its variants.
For a complete survey of differential evolution, I suggest the paper entitled "Differential Evolution: A Survey of the State-of-the-Art". The brief explanation is:
DE has 2 basic crossover operators and 5 basic mutation operators, so there are 2 * 5 = 10 basic DE variants.
The two crossover operators are exponential and binomial.
Exponential crossover:
D is the dimensionality of the problem space, n is a randomly chosen starting index from [1, D], Cr is the crossover rate, and L, the number of consecutive components copied, is drawn from [1, D].
Binomial crossover:
j refers to the j-th dimension, i is the vector index, G is the generation number, and jrand is a randomly chosen index from [1, D].
Five mutation operators are DE/rand/1 , DE/best/1 , DE/target-to-best/1 , DE/best/2 and DE/rand/2.
DE/rand/1: V(i)=X(r1)+F*(X(r2)-X(r3))
DE/best/1: V(i)=X(best)+F*(X(r1)-X(r2))
DE/target-to-best/1: V(i)=X(i)+F*(X(best)-X(i))+F*(X(r1)-X(r2))
DE/best/2: V(i)=X(best)+F*(X(r1)-X(r2))+F*(X(r3)-X(r4))
DE/rand/2: V(i)=X(r1)+F*(X(r2)-X(r3))+F*(X(r4)-X(r5))
V(i) is the donor (mutant) vector for target vector X(i); F is the scale factor for the difference vectors; r1, r2, r3, r4, r5 are mutually exclusive indices, randomly chosen from [1, NP] and different from i; best is the index of the fittest vector in the current population; and NP is the population size.
These are the basic variants of DE. DE also has many special-purpose variants, which are explained in the paper mentioned above.
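The mutation and crossover formulas above can be sketched in code. This is a minimal illustration, not a full DE implementation: NP, D, F, and Cr are arbitrary illustrative values, and bounds handling and the selection step are omitted.

```python
import random

# Minimal sketch of the DE/rand/1 mutation and binomial crossover steps
# listed above. NP, D, F and Cr are illustrative choices; a full DE loop
# would also evaluate fitness and keep the better of target and trial.
NP, D, F, Cr = 10, 4, 0.8, 0.9
random.seed(1)
population = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(NP)]

def mutate_rand_1(pop, i):
    # DE/rand/1: V(i) = X(r1) + F * (X(r2) - X(r3)),
    # with r1, r2, r3 distinct and different from i
    r1, r2, r3 = random.sample([k for k in range(NP) if k != i], 3)
    return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]

def binomial_crossover(target, donor):
    # Each dimension is taken from the donor with probability Cr;
    # j_rand guarantees at least one donor component survives.
    j_rand = random.randrange(D)
    return [donor[j] if (random.random() < Cr or j == j_rand) else target[j]
            for j in range(D)]

donor = mutate_rand_1(population, 0)
trial = binomial_crossover(population[0], donor)
print(trial)  # a trial vector mixing target and donor components
```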

Can someone explain why f(n) + o(f(n)) = theta(f(n))? [closed]

According to this page:
The statement: f(n) + o(f(n)) = theta(f(n)) appears to be true.
Where: o = little-O, theta = big theta
This does not make intuitive sense to me. We know that o(f(n)) grows asymptotically faster than f(n). How, then, could it be upper bounded by f(n), as is implied by Big-Theta?
Here is a counter-example:
let f(n) = n, o(f(n)) = n^2.
n + n^2 is NOT in theta(n)
It seems to me that the answer in the previously linked stackexchange answer is wrong. Specifically, the statement below seems as if the poster is confusing little-o with little-omega.
Since g(n) is o(f(n)), we know that for each ε > 0 there is an n_ε such that |g(n)| < ε|f(n)| whenever n ≥ n_ε
Update: I've realized the answer to my question.
I was confused about what o(f(n)) is. I thought that, for f(n) = n, an example member of o(f(n)) was n^2.
This is not correct. o(f(n)) is the set of functions that grow strictly slower than f(n): each is upper bounded by f and not asymptotically tight with it.
For instance, if f(n) = n, then g(n) = 1 is a member of o(f(n)), but g(n) = n^2 is NOT a member of o(f(n)).
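A numeric illustration of this corrected understanding (suggestive, not a proof): take f(n) = n and g(n) = sqrt(n), which is a member of o(f(n)). The ratio (f(n) + g(n)) / f(n) squeezes toward 1, consistent with f + o(f) being Theta(f):

```python
import math

# Numeric illustration of f(n) + o(f(n)) = Theta(f(n)): with f(n) = n
# and g(n) = sqrt(n) in o(f), the ratio (f + g) / f tends to 1, so the
# sum is sandwiched between constant multiples of f for large n.
def f(n):
    return n

def g(n):
    return math.sqrt(n)

for n in (10, 10**4, 10**8):
    print(n, (f(n) + g(n)) / f(n))
# The printed ratios approach 1 as n grows.
```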

What machine learning algorithm for this simple optimisation? [closed]

I'll formulate a simple problem that I'd like to solve with machine learning (in R or similar platforms): my algorithm takes 3 parameters (a,b,c), and returns a score s in range [0,1]. The parameters are all categorical: a has 3 options, b has 4, and c has 10.
Therefore my dataset has 3 * 4 * 10 = 120 cases.
High scores are desirable (close to 1), low scores are not (close to 0).
Let's treat the algorithm as a black box, taking (a, b, c) and returning a score s.
The dataset looks like this:
a, b, c, s
------------------
a1, b1, c1, 0.223
a1, b1, c2, 0.454
...
If I plot the density of the s for each parameter, I get very wide distributions, in which some cases perform very well (s > .8 ), others badly (s < .2 ).
If I look at the cases where s is very high, I can't see any clear pattern.
Parameter values that overall perform badly can perform very well in combination with specific parameters, and vice versa.
To measure how well a specific value performs (e.g. a1), I compute the median:
median( mydataset[ a == a1]$s )
For example, median(a1)=.5, median(b3)=.9, but when I combine them, I get a lower result s(a_1,b_3)= .3.
On the other hand, median(a2)=.3, median(b1)=.4, but s(a2,b1)= .7.
Given that there aren't parameter values that perform always well, I guess I should look for combinations (of 2 parameters) that seem to perform well together, in a statistically significant way (i.e. excluding outliers that happen to have very high scores).
In other words, I want to obtain a policy to make the optimal parameter choice, e.g. the best performing combinations are (a1,b3), (a2,b1), etc.
Now, I guess that this is an optimisation problem that can be solved using machine learning.
What standard techniques would you recommend in this context?
EDIT: somebody suggested a linear programming solution with glpk, but I don't understand how to apply linear programming to this problem.
The most standard technique for this question is linear regression. You can predict the score for specific parameter values, or, more generally, fit a function of your 3 parameters and choose the combination that maximizes its predicted value. Since the parameters are categorical, they need to be dummy (one-hot) encoded, and interaction terms are needed to capture the pairwise effects you describe.
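A simpler baseline is also worth noting (a hypothetical sketch with random stand-in scores, since the real black box isn't available here): because the 120 cases can be enumerated exhaustively, you can score every (a, b) pair directly by its median, mirroring the median-based analysis in the question.

```python
import itertools
import random
import statistics

# Hypothetical sketch: with only 3 * 4 * 10 = 120 exhaustively
# enumerated cases, one simple "policy" scores every (a, b) pair by the
# median of s over its rows. The scores here are random stand-in data.
A = ['a1', 'a2', 'a3']
B = ['b1', 'b2', 'b3', 'b4']
C = ['c%d' % i for i in range(1, 11)]
random.seed(0)
data = [(a, b, c, random.random()) for a, b, c in itertools.product(A, B, C)]

def pair_median(a, b):
    # median score over the 10 rows sharing this (a, b) combination
    return statistics.median(s for (x, y, _, s) in data if x == a and y == b)

best = max(itertools.product(A, B), key=lambda ab: pair_median(*ab))
print(best)  # the (a, b) pair with the highest median score
```

With real data, comparing this medians-based policy against a fitted regression shows whether the interaction effects are strong enough to justify a model.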