What would be the time complexity of this function?
def f(n):
    for i in range(1, n + 1):
        print("hello")
Apparently it's exponential because of how n is represented in binary, or something like that?
I thought it should just be O(n), right?
This is O(n): the function prints "hello" exactly n times, so the running time is linear in n, not exponential. (The "exponential" claim only makes sense if you measure the input size as the number of bits of n, which is roughly log2(n); n iterations is exponential in that bit length. That convention matters in areas like number-theoretic algorithms, but under the usual convention of measuring in terms of n itself, the complexity is simply O(n).)
Since the for loop runs from 1 to n, the complexity is O(n); it is linear.
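If you want a quick sanity check, here is a small sketch (not a proof): count the iterations instead of printing and observe that the count is exactly n, so the work grows linearly with n.

def work(n):
    # same loop as in the question, counting iterations instead of printing
    steps = 0
    for i in range(1, n + 1):
        steps += 1
    return steps

for n in (10, 100, 1000):
    print(n, work(n))   # prints 10 10, 100 100, 1000 1000 -- the count equals n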
Is there any tool in Python that can help me compute an exponentially weighted (moving) covariance matrix? R seems to have many packages that accomplish this.
Use ewm.cov in pandas. You can specify the smoothing factor in terms of halflife, span, or center of mass.
In pandas 0.19, the result is a Panel. In pandas 0.20, you'll get a MultiIndex DataFrame because Panel is deprecated.
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000, 3))
covs = df.ewm(span=60).cov()
covs[3]  # covariance matrix as of period 4 (Panel-style indexing); the index could also be a DatetimeIndex
Out[7]:
          0        1        2
0   0.48489  0.12341 -0.41335
1   0.12341  0.59947 -0.18762
2  -0.41335 -0.18762  0.67513
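On newer pandas versions (0.20 and later, where Panel no longer exists) the same call returns a MultiIndex DataFrame, and one way to pull out the matrix for a given period is .loc; a minimal sketch:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000, 3))
covs = df.ewm(span=60).cov()   # rows indexed by (period, variable)
covs.loc[3]                    # 3x3 EWM covariance matrix as of period 4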
There is an elliptic curve with the parameters:
a = 0xb3b04200486514cb8fdcf3037397558a8717c85acf19bac71ce72698a23f635
b = 0x12f55f6e7419e26d728c429a2b206a2645a7a56a31dbd5bfb66864425c8a2320
Also the prime number is:
q = 0x247ce416cf31bae96a1c548ef57b012a645b8bff68d3979e26aa54fc49a2c297
How can I solve the equation P * 65537 = H and obtain the value of P?
P and H are points on the curve, and H equals (72782057986002698850567456295979356220866771008308693184283729159903205979695, 7766776325114464021923523189912759786515131109431296018171065280757067869793).
Note that the multiplication in the equation is elliptic-curve scalar (point) multiplication!
You need to know the number of points on the curve (the curve order) to solve this; call that number n. Compute the inverse of 65537 modulo n (which exists as long as gcd(65537, n) = 1) and scalar-multiply H by that inverse: since 65537 * d ≡ 1 (mod n) and n*P is the point at infinity, d*H = d*65537*P = P.
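A minimal sketch of that recipe in Python (3.8+ for pow(x, -1, m)), assuming you already know the curve order n, e.g. from a point-counting tool such as SageMath's E.order(). The point arithmetic below is a plain short-Weierstrass implementation for illustration, not a hardened library.

def ec_add(P, Q, a, q):
    # add two points on y^2 = x^3 + a*x + b over GF(q); None is the point at infinity
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % q == 0:
        return None                                    # P == -Q
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow((2 * y1) % q, -1, q) % q
    else:
        lam = (y2 - y1) * pow((x2 - x1) % q, -1, q) % q
    x3 = (lam * lam - x1 - x2) % q
    y3 = (lam * (x1 - x3) - y1) % q
    return (x3, y3)

def ec_mul(k, P, a, q):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, q)
        P = ec_add(P, P, a, q)
        k >>= 1
    return R

# Recovering P from H = 65537 * P, with a, q, H as given in the question:
# d = pow(65537, -1, n)      # n = number of points on the curve (assumed known)
# P = ec_mul(d, H, a, q)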
I'm not sure what the word is for turning a vector into a unit vector ("unitfy"?).
What I mean is, for the vector (4,3) -> (4/5, 3/5): just divide the vector by its length.
I can do this as vv = v / np.linalg.norm(v)
What is the right word for this, and what is the standard way of doing it?
The word is "normalize":
http://mathworld.wolfram.com/NormalizedVector.html
Dividing by the norm is the standard way of doing this. Watch out for the case when the norm is very close to zero (you may want to compare it against a small epsilon and handle that case specially, or throw an exception).
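For example, a small sketch with that guard built in (the epsilon value here is arbitrary):

import numpy as np

def normalize(v, eps=1e-12):
    norm = np.linalg.norm(v)
    if norm < eps:
        raise ValueError("cannot normalize a vector with (near-)zero length")
    return v / norm

print(normalize(np.array([4.0, 3.0])))   # [0.8 0.6]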
See also:
how to normalize array numpy?
I'll formulate a simple problem that I'd like to solve with machine learning (in R or similar platforms): my algorithm takes 3 parameters (a,b,c), and returns a score s in range [0,1]. The parameters are all categorical: a has 3 options, b has 4, and c has 10.
Therefore my dataset has 3 * 4 * 10 = 120 cases.
High scores are desirable (close to 1), low scores are not (close to 0).
Let's treat the algorithm as a black box that takes a, b, c and returns a score s.
The dataset looks like this:
a, b, c, s
------------------
a1, b1, c1, 0.223
a1, b1, c2, 0.454
...
If I plot the density of s for each parameter, I get very wide distributions, in which some cases perform very well (s > .8) and others badly (s < .2).
If I look at the cases where s is very high, I can't see any clear pattern.
Parameter values that overall perform badly can perform very well in combination with specific parameters, and vice versa.
To measure how well a specific value performs (e.g. a1), I compute the median:
median(mydataset$s[mydataset$a == "a1"])
For example, median(a1) = .5 and median(b3) = .9, but when I combine them I get a lower result: s(a1,b3) = .3.
On the other hand, median(a2) = .3 and median(b1) = .4, but s(a2,b1) = .7.
Given that there are no parameter values that always perform well, I guess I should look for combinations (of 2 parameters) that seem to perform well together in a statistically significant way (i.e. excluding outliers that happen to have very high scores).
In other words, I want to obtain a policy to make the optimal parameter choice, e.g. the best performing combinations are (a1,b3), (a2,b1), etc.
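For reference, the pairwise version of the median check above could be written with pandas like this (a sketch; the data below is just a random placeholder with the same shape as my real dataset):

import itertools
import numpy as np
import pandas as pd

# placeholder data with the same shape as the real dataset (3 * 4 * 10 = 120 rows)
rows = list(itertools.product([f"a{i}" for i in range(1, 4)],
                              [f"b{i}" for i in range(1, 5)],
                              [f"c{i}" for i in range(1, 11)]))
mydataset = pd.DataFrame(rows, columns=["a", "b", "c"])
mydataset["s"] = np.random.default_rng(0).uniform(0, 1, len(mydataset))

# median score per (a, b) pair, best combinations first
pair_medians = (mydataset.groupby(["a", "b"])["s"]
                         .median()
                         .sort_values(ascending=False))
print(pair_medians.head())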
Now, I guess that this is an optimisation problem that can be solved using machine learning.
What standard techniques would you recommend in this context?
EDIT: somebody suggested a linear programming solution with glpk, but I don't understand how to apply linear programming to this problem.
The most standard technique for this problem is linear regression. You can use it to predict the score for specific parameter values, or, more generally, to fit a function of your 3 parameters and then search for the combination that maximises it.
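A minimal sketch of that idea in Python (scikit-learn assumed; the data is a random placeholder with the same shape as the question's 120 cases): one-hot encode the categorical parameters, add pairwise interaction terms so that combinations like (a1, b3) can be captured, fit ordinary least squares, and rank the combinations by predicted score.

import itertools
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# placeholder data: replace with the real 120-case dataset (columns a, b, c, s)
rows = list(itertools.product([f"a{i}" for i in range(1, 4)],
                              [f"b{i}" for i in range(1, 5)],
                              [f"c{i}" for i in range(1, 11)]))
df = pd.DataFrame(rows, columns=["a", "b", "c"])
df["s"] = np.random.default_rng(0).uniform(0, 1, len(df))

# one-hot encode the categorical parameters and add pairwise interaction terms
X = pd.get_dummies(df[["a", "b", "c"]], dtype=float)
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
Xi = poly.fit_transform(X)

model = LinearRegression().fit(Xi, df["s"])

# score every (a, b, c) combination and rank by the predicted s
df["predicted_s"] = model.predict(Xi)
print(df.sort_values("predicted_s", ascending=False).head())

With only 120 cases and that many interaction columns, plain least squares can overfit, so a regularised variant such as Ridge (or a tree-based model) is a reasonable next step.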