For the following round step functions:
Theta
Rho
Pi
Chi
Iota
Do you know which of them provide permutation, non-linearity, and substitution?
And how could I figure out whether a given function provides permutation, non-linearity, or substitution?
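One quick way to probe non-linearity is the GF(2) linearity law f(a ⊕ b) = f(a) ⊕ f(b): bit permutations and XOR-only mixing satisfy it for every input, while a substitution-style step does not. Below is a minimal Python sketch of that test (my own illustration; chi_row assumes the usual single-row form a[i] ⊕ (¬a[i+1] ∧ a[i+2]) with indices mod 5):

import random

WIDTH = 5                  # Chi acts on 5-bit rows
MASK = (1 << WIDTH) - 1

def chi_row(a):
    # a[i] ^= (~a[i+1]) & a[i+2], indices mod 5, on a 5-bit integer
    bits = [(a >> i) & 1 for i in range(WIDTH)]
    out = [bits[i] ^ ((bits[(i + 1) % WIDTH] ^ 1) & bits[(i + 2) % WIDTH])
           for i in range(WIDTH)]
    return sum(b << i for i, b in enumerate(out))

def looks_linear(f, trials=1000):
    # Return False as soon as f(a ^ b) != f(a) ^ f(b) for some random a, b
    for _ in range(trials):
        a, b = random.randrange(MASK + 1), random.randrange(MASK + 1)
        if f(a ^ b) != f(a) ^ f(b):
            return False
    return True

print(looks_linear(chi_row))   # expected: False, i.e. the step is non-linear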
I've come across a from-scratch implementation of Gaussian processes:
http://krasserm.github.io/2018/03/19/gaussian-processes/
There, the isotropic squared exponential kernel is implemented in NumPy. The kernel is

κ(x_i, x_j) = σ_f² · exp(−‖x_i − x_j‖² / (2ℓ²))

and the implementation is:
import numpy as np

def kernel(X1, X2, l=1.0, sigma_f=1.0):
    # Pairwise squared Euclidean distances between the rows of X1 and X2
    sqdist = np.sum(X1**2, 1).reshape(-1, 1) + np.sum(X2**2, 1) - 2 * np.dot(X1, X2.T)
    return sigma_f**2 * np.exp(-0.5 / l**2 * sqdist)
which is consistent with the implementation of Nando de Freitas: https://www.cs.ubc.ca/~nando/540-2013/lectures/gp.py
However, I am not quite sure how this implementation matches the provided formula, especially the sqdist part. In my opinion it is wrong, yet it works (and delivers the same results as scipy's cdist with squared Euclidean distance). Why do I think it is wrong? If you multiply out the squared difference, you get

x_iᵀ x_i − 2 x_iᵀ x_j + x_jᵀ x_j

which is either a scalar or an n×n matrix for a vector x_i, depending on whether you define x_i to be a column vector or not. The implementation, however, gives back an n×1 vector with the squared values.
I hope someone can shed light on this.
I found out: the implementation is correct. I just was not aware of the (in my opinion) fuzzy notation which is sometimes used in ML contexts. What is to be achieved is a distance matrix: each row vector of matrix A is compared with each row vector of matrix B to build the covariance matrix, not (as I somehow guessed) the direct distance between two matrices/vectors.
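As a quick sanity check (my own sketch, not from the linked post), the vectorized sqdist term can be compared directly against scipy's pairwise squared Euclidean distances between the rows of X1 and X2:

import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X1 = rng.normal(size=(5, 3))   # 5 points in R^3
X2 = rng.normal(size=(4, 3))   # 4 points in R^3

sqdist = np.sum(X1**2, 1).reshape(-1, 1) + np.sum(X2**2, 1) - 2 * np.dot(X1, X2.T)
print(np.allclose(sqdist, cdist(X1, X2, metric='sqeuclidean')))   # True: a 5x4 matrix of pairwise distances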
Is it possible to model a non-linear piecewise cost function in CPLEX?
For example, something like the figure here:

[Figure: non-linear piecewise cost function (black line)]
I know one way is to linearise the quadratic part, but I want to use the quadratic part as it is.
You can see that the condition is on the decision variable itself; the cost function can be formulated as follows:

if x ≤ x0 then cost = quadratic part;
else cost = linear part.
Thanks in advance :)
One way is to pick the cheapest curve at x:
min cost
cost ≥ f(x) − Mδ
cost ≥ g(x) − M(1−δ)
δ ∈ {0,1}
M is a constant: the largest difference between the two curves (i.e. M=|f(xmax)−g(xmax)|). δ is a binary variable. I assumed we are minimizing cost and that the quadratic function is convex.
This construct implements
min cost
cost ≥ f(x) or cost ≥ g(x)
The solver will always drop the most expensive function, and keep the cheapest. In your picture this is exactly what we want: on the left of x0 the quadratic function is the cheapest, and on the right of x0, the linear function is cheaper. This formulation will automatically pick the cheaper option.
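For illustration, here is a minimal docplex sketch of this construct; f(x) = x² and g(x) = 2x − 1 are made-up example curves, and M = 100 is picked by hand, so substitute your actual quadratic and linear cost pieces and a proper bound:

from docplex.mp.model import Model

mdl = Model(name="piecewise_cost")
x = mdl.continuous_var(lb=0, ub=10, name="x")
cost = mdl.continuous_var(lb=-mdl.infinity, name="cost")
delta = mdl.binary_var(name="delta")

M = 100  # upper bound on |f(x) - g(x)| over the domain of x
mdl.add_constraint(cost >= x * x - M * delta)             # cost >= f(x) - M*delta
mdl.add_constraint(cost >= 2 * x - 1 - M * (1 - delta))   # cost >= g(x) - M*(1-delta)

mdl.minimize(cost)
if mdl.solve():
    print(x.solution_value, cost.solution_value)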
I have implemented the serial Pollard rho algorithm for solving the elliptic curve discrete log problem. Now I am trying to parallelize it using the parallel Pollard rho algorithm.
So I just need some help understanding what kind of property I can use for selecting distinguished points for collision detection. It would be a great help if some examples could be suggested as well.
You could use any property. The thing to get right is the probability of a point being a distinguished point. For example, if we want one distinguished point per 2^32 points, we could define a distinguished point as a point whose x-coordinate has its last 32 bits equal to 0.
For example in Sage with point P:
>>> P.xy()[0].lift() & 0xffffffff == 0
True/False
In the normal case this will do, but I admit that it is not really ideal when you are doing the elliptic curve arithmetic in a projective or Jacobian coordinate system, because you will have to do an inversion for every distinguished-point test.
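For illustration only (plain Python, independent of Sage), a distinguished-point test on the integer value of the affine x-coordinate could look like this:

def is_distinguished(x_coord, mask_bits=32):
    # True when the lowest mask_bits bits of the x-coordinate are zero,
    # i.e. roughly one point in 2**mask_bits is distinguished
    return x_coord & ((1 << mask_bits) - 1) == 0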
Is the Pearson correlation coefficient -- with one vector, x, exogenous and another vector, y, as a choice variable -- a suitable quadratic objective function for quadratic programming solvers like Gurobi?
A quick Google search for "Gurobi objective function" shows that Gurobi has an API to set an objective function that accepts a linear or quadratic expression. That is to be expected, because quadratic programming is, by definition, the optimization of a quadratic function, with the math behind the methods specifically designed for this class (e.g. working directly with the Q coefficient matrix and the c vector rather than the raw function).
I didn't look into the details too much, but the Pearson product-moment correlation coefficient is not a quadratic function of y; written out (see below), it is a ratio involving the square root of a quadratic form. So, if your specific case can't be simplified to a quadratic, the answer is no.
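For reference, with x̄ and ȳ the sample means, the coefficient is

$$ r(y) = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}} $$

so the denominator contains the square root of a quadratic in y, which a quadratic objective (yᵀQy + cᵀy) cannot express directly.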
I cannot say anything about other solvers because each one is an independent product and has to be considered separately.
Since your function appears to be piecewise continuous and infinitely differentiable, you're probably interested in general-purpose gradient methods instead.
I have a question concerning the NumPy function linalg.lstsq(a, b). Is there any way to check how fast this method converges? I mean, are there any characteristics which indicate how fast the computation converges?
Thanks in advance for the brainstorm.
The NumPy function linalg.lstsq uses the singular value decomposition (SVD) to solve the least-squares problem. It is a direct method rather than an iterative one, so there is no convergence rate to monitor; the cost is fixed by the matrix size. If your matrix A is n by n, it requires on the order of n^3 flops.
More precisely, I think the function uses Householder bidiagonalization to compute the SVD, so if your matrix is m by n the complexity will be O(max(m, n) · min(m, n)^2).
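A small sketch of what you can inspect instead (standard NumPy API): lstsq returns the effective rank and the singular values it used, which describe the conditioning of the problem rather than a convergence rate:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))   # 100 equations, 3 unknowns
b = rng.normal(size=100)

# Direct SVD-based solve: no iterations, so nothing "converges" over time
x, residuals, rank, singular_values = np.linalg.lstsq(A, b, rcond=None)
print(rank, singular_values)    # effective rank of A and its singular values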