Time complexity apparently exponential [closed] - time-complexity

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I got a question: what would be the time complexity of this function?
Function(int n) {
    for (i = 1 to n):
        print("hello")
}
apparently it's exponential because of binary numbers or something??
it should be O(n) right?

This is clearly O(n): the function prints "hello" exactly n times, so the time complexity is O(n). It is linear, not exponential. (The "exponential" claim you heard likely comes from measuring input size in bits: writing n down takes about log2(n) bits, so n iterations is exponential in the bit-length of the input. Under the usual convention of stating complexity in terms of n itself, it is simply O(n).)

Since the for loop runs from 1 to n, the complexity is O(n). It is linear.
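Both answers can be spot-checked empirically. A minimal sketch (hello_count is my own name for the loop, counting iterations instead of printing):

```python
def hello_count(n):
    """Run the loop from the question, counting iterations instead of printing."""
    count = 0
    for i in range(1, n + 1):
        count += 1  # stands in for print("hello")
    return count

# The iteration count grows linearly with n, not exponentially.
for n in (10, 100, 1000):
    print(n, hello_count(n))
```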

Related

sin function in numpy have value greater than 1 [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 2 years ago.
In numpy, the np.sin() function is used to compute the sine, but it generates values greater than 1. The sine function should only produce output in the range [-1, +1].
>>> np.sin(np.pi/2)
1.0
>>> np.pi
3.141592653589793
>>> np.pi/2
1.5707963267948966
>>> np.sin(1.57)
0.9999996829318346
>>> np.sin(2*np.pi)
-2.4492935982947064e-16
>>> np.sin(np.pi)
1.22464679914735
You've mis-copied the last line. The correct output is
>>> np.sin(np.pi)
1.2246467991473532e-16
That's 1.22e-16, so approximately, well, 0.
I assume you mean something like
>>> np.sin(np.pi)
1.2246467991473532e-16
The output is not greater than 1. In fact, it is very small: the e-16 suffix means ×10^-16 (ten to the power of minus sixteen). This is standard scientific notation, see: https://en.wikipedia.org/wiki/Scientific_notation#E_notation
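The point can be checked without NumPy at all; the standard library's math module uses the same double-precision arithmetic, so (under that assumption) it reproduces the same tiny value:

```python
import math

x = math.sin(math.pi)      # same double-precision result as np.sin(np.pi)
print(x)                   # on the order of 1e-16
print(f"{x:.2e}")          # scientific (E) notation makes the magnitude obvious
print(x < 1)               # the value is far below 1, not above it
```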

Example of Polynomial time algorithm [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
What is an example of a polynomial-time algorithm?
Is a polynomial-time algorithm the fastest?
Suppose there are 100 elements in an array; how can I decide whether an algorithm is polynomial time?
Q: What is an example of a polynomial-time algorithm?
for (int i = 0; i < n; ++i)
    printf("%d", i);
This is a linear algorithm, and linear time belongs to the polynomial class.
Q: Is a polynomial-time algorithm the fastest?
No; logarithmic- and constant-time algorithms are asymptotically faster than polynomial-time ones.
Q: Suppose there are 100 elements in an array; how can I decide whether an algorithm is polynomial time?
You haven't specified an algorithm here, just the data structure (an array with 100 elements). To determine whether an algorithm runs in polynomial time, find its big-O bound: if it is O(n^k) for some constant k, it is polynomial time.
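To make this concrete at the question's scale of 100 elements, here is a sketch of a quadratic (and therefore still polynomial) algorithm; count_pairs is a name introduced for illustration:

```python
def count_pairs(arr):
    """Count ordered pairs (i, j) with i < j -- a classic O(n^2) nested loop."""
    steps = 0
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1  # one comparison step per pair
    return steps

# For n = 100 elements this performs n*(n-1)/2 = 4950 steps:
# O(n^2), which is polynomial regardless of the value of n.
print(count_pairs(list(range(100))))
```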

Dividing a point by a specific number in elliptic curve [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
There is an elliptic curve with these parameters:
a = 0xb3b04200486514cb8fdcf3037397558a8717c85acf19bac71ce72698a23f635
b = 0x12f55f6e7419e26d728c429a2b206a2645a7a56a31dbd5bfb66864425c8a2320
Also the prime number is:
q = 0x247ce416cf31bae96a1c548ef57b012a645b8bff68d3979e26aa54fc49a2c297
How can I solve the equation P * 65537 = H and obtain the value of P?
P and H are points, and H equals (72782057986002698850567456295979356220866771008308693184283729159903205979695, 7766776325114464021923523189912759786515131109431296018171065280757067869793).
Note that the equation uses elliptic-curve point multiplication!
You need to know the number of points on the curve to solve this. Call that number n. Compute the inverse of 65537 modulo n, then do a scalar multiplication of H by that inverse to obtain P.
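A minimal sketch of the modular-inverse step, assuming Python 3.8+ for the three-argument pow(); the group order below is a stand-in prime (2^127 - 1) used purely for illustration, since the real order of this curve is not given in the question:

```python
# Hypothetical stand-in for the curve's group order. The real value must
# be computed from the curve itself (e.g. with Schoof's algorithm or a
# library such as SageMath). 2**127 - 1 is prime, so the inverse exists.
n = 2 ** 127 - 1
e = 65537

# Modular inverse of e modulo n (pow with exponent -1, Python 3.8+).
d = pow(e, -1, n)
assert (e * d) % n == 1

# With a real elliptic-curve library you would then recover
#   P = d * H
# because d * (e * P) = (d * e) * P = P  when d*e = 1 (mod n).
print(d)
```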

Order of growth for given functions [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
I've tried to sort these functions in asymptotic growth order and would like to know if I'm on the right track.
5000log2(n)
sqrt(n) +7
8n
n/log2(n)
4nlog2(n)
n^1/100
1/4 n^2 - 10000n
You can test whether f(n) is asymptotically larger than g(n) by checking whether
lim_{n->∞} f(n) / g(n) = ∞
If the limit is a non-zero constant, f(n) and g(n) are asymptotically equal. If it is zero, f(n) is asymptotically smaller than g(n).
Most of your list looks correct. There are a few mistakes, though.
n/log2(n) should be between sqrt(n) + 7 and 8n.
n^(1/100) is the 100-th root of n and should be before the square-root.
The corrected list would be:
1) 5000·log2(n)
2) n^(1/100)
3) sqrt(n) + 7
4) n/log2(n)
5) 8n
6) 4n·log2(n)
7) (1/4)n^2 - 10000n
as per my knowledge.
For more information on the topic you can see the definitions of Big-O, Big-Theta, and Big-Omega.
Corrections to the above list are most welcome.
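The limit test from the answer can be illustrated numerically for one of the corrected pairs: the ratio of n/log2(n) to sqrt(n) + 7 keeps growing, which is why n/log2(n) belongs later in the list. A rough sketch (not a proof; ratio is a name introduced here):

```python
import math

def ratio(n):
    """(n / log2 n) divided by (sqrt n + 7): grows without bound as n grows."""
    return (n / math.log2(n)) / (math.sqrt(n) + 7)

for n in (10 ** 3, 10 ** 6, 10 ** 9, 10 ** 12):
    print(n, ratio(n))    # each ratio is larger than the last
```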

Can someone explain why f(n) + o(f(n)) = theta(f(n))? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
According to this page:
The statement: f(n) + o(f(n)) = theta(f(n)) appears to be true.
Where: o = little-O, theta = big theta
This does not make intuitive sense to me. We know that o(f(n)) grows asymptotically faster than f(n). How, then could it be upper bounded by f(n) as is implied by big theta?
Here is a counter-example:
let f(n) = n, o(f(n)) = n^2.
n + n^2 is NOT in theta(n)
It seems to me that the answer in the previously linked stackexchange answer is wrong. Specifically, the statement below seems as if the poster is confusing little-o with little-omega.
Since g(n) is o(f(n)), we know that for each ϵ>0 there is an nϵ such that |g(n)|<ϵ|f(n)| whenever n≥nϵ
Update: I've realized the answer to my question.
I was confused about what o(f(n)) was. I thought that, for f(n) = n, a function like n^2 was in o(f(n)).
This is not correct: o(f(n)) is the set of functions that are upper-bounded by f and not asymptotically tight with f.
For instance, if f(n) = n, then g(n) = 1 is a member of o(f(n)), but g(n) = n^2 is NOT a member of o(f(n)).
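The corrected understanding in the update can be turned into a short proof sketch of the original identity (my own derivation, not from the linked answer):

```latex
\textbf{Claim.}\quad g(n) \in o(f(n)) \text{ and } f(n) > 0 \text{ eventually}
\implies f(n) + g(n) = \Theta(f(n)).

\text{Since } g \in o(f), \text{ taking } \epsilon = \tfrac{1}{2}
\text{ in the definition gives an } n_0 \text{ with }
|g(n)| \le \tfrac{1}{2} f(n) \text{ for all } n \ge n_0.

\text{Hence}\quad
\tfrac{1}{2} f(n) \;\le\; f(n) + g(n) \;\le\; \tfrac{3}{2} f(n)
\qquad (n \ge n_0),

\text{which is exactly the two-sided bound defining }
f(n) + g(n) = \Theta(f(n)).
```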