sin function in numpy has value greater than 1 [closed] - numpy

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 2 years ago.
In numpy, the np.sin() function is used to compute the sine, but it generates values greater than 1. The sine function should only produce output in the range [-1, +1].
>>> np.sin(np.pi/2)
1.0
>>> np.pi
3.141592653589793
>>> np.pi/2
1.5707963267948966
>>> np.sin(1.57)
0.9999996829318346
>>> np.sin(2*np.pi)
-2.4492935982947064e-16
>>> np.sin(np.pi)
1.22464679914735

You've mis-copied the last line. The correct output is
>>> np.sin(np.pi)
1.2246467991473532e-16
That's 1.22e-16, so approximately, well, 0.

I assume you mean something like
>>> np.sin(np.pi)
1.2246467991473532e-16
The output is not greater than 1. In fact, it is very small. The e-16 represents × 10^-16 (ten to the power of minus sixteen). This is a common notation; see: https://en.wikipedia.org/wiki/Scientific_notation#E_notation
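For illustration (this snippet is mine, not from the answers above), printing the same value in fixed-point notation makes its size explicit:
>>> import numpy as np
>>> value = np.sin(np.pi)
>>> value
1.2246467991473532e-16
>>> print(f"{value:.20f}")
0.00000000000000012246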

Related

Time complexity apparently exponential [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I have a question: what would be the time complexity of this function?
def function(n):
    for i in range(1, n + 1):
        print("hello")
Apparently it's exponential because of binary numbers or something? It should be O(n), right?
This is clearly O(n). The function prints "hello" n times, so the time complexity is O(n); it is linear, not exponential. (The "binary numbers" remark probably refers to measuring complexity in the size of the input: n takes only about log2(n) bits to write down, so the running time is exponential in the number of input bits, even though it is linear in the value of n, which is the usual way to state it here.)
Since the for loop runs from 1 to n, the complexity is O(n). It is linear.
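A small sketch (mine, not from the answers above) that counts the loop's operations for a few values of n and makes the linear growth visible:
def count_operations(n):
    # each iteration stands in for one print("hello")
    count = 0
    for i in range(1, n + 1):
        count += 1
    return count

for n in (10, 100, 1000):
    print(n, count_operations(n))  # prints 10 10, 100 100, 1000 1000 -- linear in n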

EWMA covariance matrix using risk metrics methodology [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
Is there any tool in Python that can help me do this? R seems to have many packages that accomplish it.
Use ewm.cov in pandas. You can specify the smoothing factor in terms of halflife, span, or center of mass.
In pandas 0.19, the result is a Panel. In pandas 0.20, you'll get a MultiIndex DataFrame because Panel is deprecated.
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000, 3))
covs = df.ewm(span=60).cov()
covs[3]  # covariance matrix as of period 4; the index could also be a DatetimeIndex
Out[7]:
          0        1        2
0   0.48489  0.12341 -0.41335
1   0.12341  0.59947 -0.18762
2  -0.41335 -0.18762  0.67513
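On pandas 0.20 and later, where the result is a MultiIndex DataFrame rather than a Panel, the per-period matrix is selected with .loc instead of item access (a small sketch under that assumption):
covs = df.ewm(span=60).cov()  # MultiIndex DataFrame: (period, column) rows
covs.loc[3]                   # 3x3 covariance matrix as of period 4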

Dividing a point by a specific number in elliptic curve [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
There is an elliptic curve with the parameters:
a = 0xb3b04200486514cb8fdcf3037397558a8717c85acf19bac71ce72698a23f635
b = 0x12f55f6e7419e26d728c429a2b206a2645a7a56a31dbd5bfb66864425c8a2320
Also the prime number is:
q = 0x247ce416cf31bae96a1c548ef57b012a645b8bff68d3979e26aa54fc49a2c297
How can I solve the equation P * 65537 = H and obtain the value of P?
P and H are points, and H equals (72782057986002698850567456295979356220866771008308693184283729159903205979695, 7766776325114464021923523189912759786515131109431296018171065280757067869793).
Note that the multiplication in the equation is elliptic curve point multiplication!
You need to know the number of points on the curve to solve this. Let's call that number n. Then you will have to compute the inverse of 65537 modulo n and do a scalar multiply of your point H by that number.
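A minimal sketch of that recipe in Python (n and scalar_mult are placeholders here; the question does not provide the group order or a point-multiplication routine):
def recover_P(H, n, scalar_mult):
    # n: number of points on the curve (group order), assumed known
    # scalar_mult(k, point): elliptic curve point multiplication, assumed provided
    d = pow(65537, -1, n)  # modular inverse of 65537 mod n (Python 3.8+)
    # H = 65537 * P, so d * H = (d * 65537 mod n) * P = P
    return scalar_mult(d, H)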

Numpy: How to unitfy a vector? [duplicate]

This question already has answers here:
How to normalize a NumPy array to a unit vector?
(15 answers)
Closed 6 years ago.
I'm not sure how to say "unitfy" for a vector.
What I mean is, for the vector (4, 3) -> (4/5, 3/5): just divide the vector by its length.
I can do this as vv = v / np.linalg.norm(v)
What is the right word for "unitfy", and what is the standard way of doing it?
The word is "normalize":
http://mathworld.wolfram.com/NormalizedVector.html
Dividing by the norm is a pretty standard way of doing this. Watch for the case when the norm is very close to zero (you may want to compare it with an epsilon and handle that case specially, or throw an exception).
See also:
how to normalize array numpy?
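A small sketch of that epsilon check (my illustration, not from the linked answers):
import numpy as np

def normalize(v, eps=1e-12):
    # divide v by its Euclidean norm, guarding against (near-)zero vectors
    norm = np.linalg.norm(v)
    if norm < eps:
        raise ValueError("cannot normalize a zero-length vector")
    return v / norm

normalize(np.array([4.0, 3.0]))  # -> array([0.8, 0.6])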

What machine learning algorithm for this simple optimisation? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'll formulate a simple problem that I'd like to solve with machine learning (in R or similar platforms): my algorithm takes 3 parameters (a,b,c), and returns a score s in range [0,1]. The parameters are all categorical: a has 3 options, b has 4, and c has 10.
Therefore my dataset has 3 * 4 * 10 = 120 cases.
High scores are desirable (close to 1), low scores are not (close to 0).
Let's treat the algorithm as a black box, taking a, b, c and returning a score s.
The dataset looks like this:
a, b, c, s
------------------
a1, b1, c1, 0.223
a1, b1, c2, 0.454
...
If I plot the density of the s for each parameter, I get very wide distributions, in which some cases perform very well (s > .8 ), others badly (s < .2 ).
If I look at the cases where s is very high, I can't see any clear pattern.
Parameter values that overall perform badly can perform very well in combination with specific parameters, and vice versa.
To measure how well a specific value performs (e.g. a1), I compute the median:
median(mydataset[mydataset$a == "a1", "s"])
For example, median(a1) = .5 and median(b3) = .9, but when I combine them I get a lower result, s(a1, b3) = .3.
On the other hand, median(a2) = .3 and median(b1) = .4, but s(a2, b1) = .7.
Given that there aren't parameter values that perform always well, I guess I should look for combinations (of 2 parameters) that seem to perform well together, in a statistically significant way (i.e. excluding outliers that happen to have very high scores).
In other words, I want to obtain a policy to make the optimal parameter choice, e.g. the best performing combinations are (a1,b3), (a2,b1), etc.
Now, I guess that this is an optimisation problem that can be solved using machine learning.
What standard techniques would you recommend in this context?
EDIT: somebody suggested a linear programming solution with glpk, but I don't understand how to apply linear programming to this problem.
The most standard technique for this question is linear regression. You can use it to predict the score for specific parameter values or, more generally, to fit a function of your 3 parameters and then choose the combination that maximizes the predicted value.
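A minimal sketch of that idea in Python (the file name and column names are illustrative, and scikit-learn is one possible choice; the answer above does not name a library):
import pandas as pd
from sklearn.linear_model import LinearRegression

# hypothetical data layout: columns a, b, c (categorical) and s (the score)
df = pd.read_csv("scores.csv")
X = pd.get_dummies(df[["a", "b", "c"]])     # one-hot encode the categorical parameters
model = LinearRegression().fit(X, df["s"])

# score every observed combination and pick the best-performing one
df["predicted"] = model.predict(X)
print(df.loc[df["predicted"].idxmax(), ["a", "b", "c"]])
Note that with main-effect dummies only, this model cannot capture the interactions described in the question; adding interaction terms (or using a tree-based model) would be needed for that.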