I want to implement scalar multiplication in which I have to multiply an elliptic-curve point by a negative number using the Crypto++ library, but I'm getting an error while doing this.
Am I supposed to take the mod of that negative number manually and then multiply it with the point? Or is there a function that will perform that task for me?
ECP::Point ECP::ScalarMultiply(const Point &a, const Integer &e) const
Since the parameter is an Integer, it should accept negative values as well, but instead it trips an assertion:
Algebra.cpp
CRYPTOPP_ASSERT(expBegin->NotNegative())
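In the absence of a built-in reduction, one workaround is to bring the scalar into [0, n) yourself before calling ScalarMultiply, where n is the order of the group you are working in. A minimal sketch (MultiplyBySignedScalar is an illustrative helper, and curve, P and n are assumed to come from your own setup):

#include <cryptopp/integer.h>
#include <cryptopp/ecp.h>

using CryptoPP::ECP;
using CryptoPP::Integer;

// Reduce a possibly negative scalar k into [0, n), where n is the order of
// the (sub)group the point belongs to, then multiply as usual.
ECP::Point MultiplyBySignedScalar(const ECP &curve, const ECP::Point &P,
                                  const Integer &k, const Integer &n)
{
    Integer r = k % n;       // the remainder may carry the dividend's sign
    if (r.IsNegative())
        r += n;              // shift into [0, n)
    return curve.ScalarMultiply(P, r);
}

This works because adding a multiple of the group order to the scalar does not change the resulting point.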
I have a portfolio optimization problem where my objective function is the mean divided by the standard deviation.
The variance is that of a difference of two random variables, so it is computed as Var(X) + Var(Y) - 2*Cov(X, Y). The variance term is specified as above, where w represents the portfolio selection, capital Sigma is a covariance matrix, and sigma_dg is a vector of covariances related to the second random variable. The problem is that CVXPY doesn't consider the last term to be nonnegative, because some of the covariance terms are negative. Obviously, I know that the variance will always be nonnegative, so I believe this should work as a quasiconvex problem. Is there any way to tell CVXPY that this variance term will always be positive?
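One workaround, sketched below under stated assumptions (the original formula is not shown, so a hypothetical joint covariance matrix C of the asset returns and the second random variable stands in), is to express the whole variance as a single quadratic form: Var(w'R - g) = z'Cz with z = [w; -1]. Since C is positive semidefinite, cp.quad_form then recognizes the term as convex, hence nonnegative, and cp.psd_wrap tells it to skip the numerical PSD check:

import cvxpy as cp
import numpy as np

# Hypothetical stand-in data: C is the (n+1) x (n+1) joint covariance matrix
# of the n asset returns and the extra random variable g, so C is PSD.
n = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n + 1, n + 1))
C = A @ A.T

w = cp.Variable(n)
z = cp.hstack([w, cp.Constant([-1.0])])      # z = [w; -1], so z'Cz = Var(w'R - g)
variance = cp.quad_form(z, cp.psd_wrap(C))   # DCP-convex, hence known nonnegative

The mean/std ratio objective itself is not DCP, but if it satisfies CVXPY's DQCP rules it can be solved with problem.solve(qcp=True).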
I'm having some trouble when I try to compute the derivative of phi(x) using the FFT.
l1 = nk(T, n0)      # Fourier coefficients; nk and tetak (defined elsewhere)
l2 = nk(T, n0)      # must return NumPy arrays of length L
l3 = tetak(T, n0)
l4 = tetak(T, n0)
dn = np.fft.irfft(l1 + 1j*l2) * (N - 2) / 4     # back to real space; (N-2)/4 normalizes
dtk = np.fft.irfft(l3 + 1j*l4) * (N - 2) / 4
phi_x = np.sqrt(dn) * np.exp(1j * dtk)          # amplitude times phase factor
Here is how I get phi(x): I create my function phi(k) in Fourier space; it is a symmetric function, so its FFT has to be real.
This is why I used np.fft.irfft.
So my function is defined as phi(x) = np.fft.irfft(np.real(phi_k) + 1j*np.imag(phi_k)) (I have written phi_k for phi(k) just to be clear).
l1, l2, l3, l4 are just lists of my Fourier coefficients, each of length L, and the factor (N-2)/4 is just for normalization. Here N = 1000.
Then I want to compute the second derivative, so I apply an FFT, multiply by -k**2, and take the inverse FFT:
derivative=np.fft.ifft(-k*k*np.fft.fft(phi_x))
I know I have to be careful because np.fft.irfft gives me an output of length 2*N - 2.
I have already tried:

k_tf1 = np.arange(-999, 999) * (2*np.pi/L)    # 999 because phi(x) has length 2*N - 2

which doesn't work at all;

k_tf1 = np.fft.ifftshift(np.arange(-999, 999))

which doesn't work either; and

k_tf1 = np.fft.fftfreq(phi_x.size, 2*np.pi/L)

which seems to work, but my values are too high. I also tried adding k_tf1 = np.fft.fftshift(k_tf1), but it doesn't work.
If someone knows the solution, I would really appreciate it!
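For reference, a minimal self-consistent sketch of the spectral second derivative on a toy function (the domain length L and sample count M below are stand-ins for the real problem). The key point is that np.fft.fftfreq expects the sample spacing d = L/M and returns frequencies in cycles per unit length, so it must be multiplied by 2*np.pi to give angular wavenumbers:

import numpy as np

L = 2 * np.pi                  # domain length (stand-in)
M = 1998                       # number of samples, e.g. 2*N - 2 with N = 1000
x = np.arange(M) * L / M
phi_x = np.sin(x)              # test function with known second derivative -sin(x)

# Angular wavenumbers on the FFT's native (unshifted) grid.
k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)

d2phi = np.fft.ifft(-k**2 * np.fft.fft(phi_x))
print(np.max(np.abs(d2phi.real + np.sin(x))))   # close to zero (machine precision)

Note that k comes out in the same unshifted order as the output of np.fft.fft, so no fftshift/ifftshift is needed.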
I have two - likely simple - questions that are bothering me, both related to quadratic programming:
1) There are two "standard" forms of the objective function I have found, differing by multiplication by negative 1.
In the R package quadprog, the objective function to be minimized is given as -d^T b + (1/2) b^T D b, and in Matlab the objective is given as d^T b + (1/2) b^T D b. How can these be the same? It seems that one has been multiplied through by negative 1 (which, as I understand it, would change the problem from a min problem to a max problem).
2) Related to the first question: when using quadprog to minimize least squares, in order to get the objective function to match the standard form it is necessary to multiply the objective by a positive 2. Does multiplication by a positive number not change the solution?
EDIT: I had the wrong sign for the Matlab objective function.
The function f(b) = d^T b is linear, thus it is both convex and concave; from an optimization standpoint this means you can either maximize or minimize it. Nevertheless, the minimizer of -d^T b + (1/2) b^T D b will be different from that of d^T b + (1/2) b^T D b, because of the additional quadratic term. The Matlab implementation will find the one with the plus sign, so if you are using different optimization software you will need to change d -> -d to get the same result.
The function -d^T b + (1/2) b^T D b, where D is symmetric positive definite, is convex and thus has a unique minimum. In general this is called the standard quadratic programming form, but that doesn't really matter. The other function, d^T b - (1/2) b^T D b, is concave and has a unique maximum. It is easy to show that for, say, a function f(x) bounded from above, the following holds:
argmax_x f(x) = argmin_x -f(x)
Using the identity above, the value b*_1 at which -d^T b + (1/2) b^T D b achieves its minimum is the same as the value b*_2 at which d^T b - (1/2) b^T D b achieves its maximum; that is, b*_1 = b*_2.
Programmatically it doesn't matter whether you are minimizing -d^T b + (1/2) b^T D b or maximizing the other one; these are implementation-dependent details.
No, it does not. For any α > 0, if x* = argmax_x f(x), then x* = argmax_x α f(x). This can be shown by contradiction.
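Both points can be checked numerically on a toy unconstrained example (data made up for illustration): the minimizer of -d^T b + (1/2) b^T D b is b* = D^(-1) d, the same b* maximizes d^T b - (1/2) b^T D b, and scaling by a positive constant leaves it unchanged:

import numpy as np

D = np.array([[2.0, 0.5],
              [0.5, 1.0]])     # symmetric positive definite
d = np.array([1.0, -1.0])

b_star = np.linalg.solve(D, d)               # stationary point of both problems

print(np.allclose(D @ b_star - d, 0))        # gradient of -d'b + 0.5 b'Db vanishes
print(np.allclose(d - D @ b_star, 0))        # gradient of  d'b - 0.5 b'Db vanishes
print(np.allclose(np.linalg.solve(2 * D, 2 * d), b_star))   # doubling changes nothing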
I am trying to create random numbers from a lognormal distribution using numpy/scipy.
The mean is given as 2000 and sigma as 800.
If I create my random values using numpy.random.lognormal(mean=2000, sigma=800, size=10000), all I get are very large or inf numbers.
Is there a way to work around this?
Be careful: the mean and sigma arguments correspond to the distribution of the log of the lognormal distribution; the actual arithmetic mean of the distribution is exp(mean + sigma**2/2), which evaluates to inf in standard double precision floating point when mean=2000 and sigma=800.
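If the intent was for the samples themselves to have an arithmetic mean of 2000 and a standard deviation of 800 (an assumption about what was meant), you can invert the standard lognormal moment formulas to get the log-space parameters:

import numpy as np

m, s = 2000.0, 800.0                        # desired arithmetic mean and std dev
sigma2 = np.log(1.0 + (s / m) ** 2)         # log-space variance
mu = np.log(m) - sigma2 / 2.0               # log-space mean
samples = np.random.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=10000)
print(samples.mean(), samples.std())        # both come out close to 2000 and 800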
See http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.lognormal.html#numpy.random.lognormal and http://en.wikipedia.org/wiki/Log-normal_distribution for more details.
I'm trying to create a piece of software that shows the angle between two vectors, and it's not working when they are both equal to (1, 1, 2): the modulus of this vector is sqrtf(6), which is rounding to 2.449490 when it should be 2.44948974278318.
Is there a way to increase precision of this operation?
Later in my software I perform this operation:
float angle = acos(dot/(modulus1*modulus2));
If modulus1 == modulus2, then modulus1*modulus2 should equal dot, but that's not what happens for some values.
I hope I made myself clear.
Thanks in advance,
Gruber
You can use double if you want greater precision. However, note that the == operator on floating-point numbers never works the way it does with integral types; use an epsilon to allow for minor differences.
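A sketch of both suggestions, plus one extra guard that matters here: rounding can push dot/(modulus1*modulus2) slightly above 1, in which case acos returns NaN, so it is worth clamping the argument. The function names and epsilon policy are illustrative, not from the original post:

#include <math.h>

/* Compare two doubles with a relative tolerance instead of ==. */
static int nearly_equal(double a, double b, double eps)
{
    return fabs(a - b) <= eps * fmax(fabs(a), fabs(b));
}

/* Angle from the dot product and moduli, clamping the cosine into [-1, 1]
   so rounding error cannot push it outside acos's domain. */
static double angle_between(double dot, double modulus1, double modulus2)
{
    double c = dot / (modulus1 * modulus2);
    if (c > 1.0)
        c = 1.0;
    else if (c < -1.0)
        c = -1.0;
    return acos(c);
}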