I have a non-convex quadratic optimization problem of type
x' * B * x,
where all entries of x are between 0 and 1 and the sum of all entries equals 1.
In scipy.optimize I would try to solve this optimization problem via
import numpy as np
from scipy.optimize import minimize, LinearConstraint
N = 2 # dimension 2 for this example
B = np.array([[2,-1],[-1,-1]]) # symmetric, but indefinite matrix
fnc = lambda x: x.T @ B @ x
res = minimize(fnc, x0 = np.ones((N,))/N, bounds = [(0,1) for m in range(N)], constraints = (LinearConstraint(np.ones((N,)),0.99, 1.01)))
So I start with the initial guess [0.5, 0.5], apply the bounds (0,1) in each dimension, and handle the equality constraint via a very narrow double inequality constraint.
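(As an aside, LinearConstraint also accepts lb == ub, so the sum-to-one condition could be imposed as an exact equality rather than a narrow double inequality, e.g. cons = LinearConstraint(np.ones((N,)), 1.0, 1.0); the variable name cons is just illustrative.)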
Now I would like to translate this to mystic because scipy does not work well with high-dimensional non-convex settings (which I am interested in).
What I was not able to find out is how to write the constraints in a form such that I only need to supply the matrix B, with variable dimension. All examples in mystic which I found so far do something like this:
def objective(x):
    x0,x1,x2,x3,x4,x5,x6,x7,x8,x9 = x
    return x0**2 + x1**2 + x0*x1 - 14*x0 - 16*x1 + (x2-10)**2 + \
           4*(x3-5)**2 + (x4-3)**2 + 2*(x5-1)**2 + 5*x6**2 + \
           7*(x7-11)**2 + 2*(x8-10)**2 + (x9-7)**2 + 45.0
bounds = [(-10,10)]*10
from mystic.symbolic import generate_constraint, generate_solvers, simplify
from mystic.symbolic import generate_penalty, generate_conditions
equations = """
4.0*x0 + 5.0*x1 - 3.0*x6 + 9.0*x7 - 105.0 <= 0.0
10.0*x0 - 8.0*x1 - 17.0*x6 + 2.0*x7 <= 0.0
-8.0*x0 + 2.0*x1 + 5.0*x8 - 2.0*x9 - 12.0 <= 0.0
3.0*(x0-2)**2 + 4.0*(x1-3)**2 + 2.0*x2**2 - 7.0*x3 - 120.0 <= 0.0
5.0*x0**2 + 8.0*x1 + (x2-6)**2 - 2.0*x3 - 40.0 <= 0.0
0.5*(x0-8)**2 + 2.0*(x1-4)**2 + 3.0*x4**2 - x5 - 30.0 <= 0.0
x0**2 + 2.0*(x1-2)**2 - 2.0*x0*x1 + 14.0*x4 - 6.0*x5 <= 0.0
-3.0*x0 + 6.0*x1 + 12.0*(x8-8)**2 - 7.0*x9 <= 0.0
"""
cf = generate_constraint(generate_solvers(simplify(equations, target=['x5','x3'])))
pf = generate_penalty(generate_conditions(equations))
This is highly verbose and needs manual insertion of all the constraints and parameters as a string, which I would like to avoid: the dimensionality and the form of the matrix B will be different each time I need to run the optimization. What I'd like to have (in a perfect world) would be something like
def objective(x):
    return x @ B @ x  # numpy syntax
equations = """
np.ones((1,N)) @ x == 1.0
"""
# constraint in a form which can handle variable dimension of x
Is that possible?
Mystic uses lists by default, so you have to convert to an array in the cost function. There are a lot of other ways to create constraints without using symbolic strings, and in your particular case there's one that works out of the box. I'd do something like this:
>>> import mystic as my
>>> import numpy as np
>>> N = 2 # dimension 2 for this example
>>> B = np.array([[2,-1],[-1,-1]]) # symmetric, but indefinite matrix
>>> c = my.constraints.normalized()(lambda x:x)
>>> bounds = [(0,1)]*N
>>> mon = my.monitors.VerboseMonitor(10)
>>> fnc = lambda x: np.array(x).T @ B @ x
>>> res = my.solvers.diffev2(fnc, x0=bounds, npop=10, bounds=bounds, ftol=1e-4, gtol=100, full_output=1, itermon=mon, constraints=c)
Generation 0 has ChiSquare: -0.920151
Generation 10 has ChiSquare: -0.999667
Generation 20 has ChiSquare: -1.000000
Generation 30 has ChiSquare: -1.000000
Generation 40 has ChiSquare: -1.000000
Generation 50 has ChiSquare: -1.000000
Generation 60 has ChiSquare: -1.000000
Generation 70 has ChiSquare: -1.000000
Generation 80 has ChiSquare: -1.000000
Generation 90 has ChiSquare: -1.000000
Generation 100 has ChiSquare: -1.000000
Generation 110 has ChiSquare: -1.000000
STOP("ChangeOverGeneration with {'tolerance': 0.0001, 'generations': 100}")
Optimization terminated successfully.
Current function value: -1.000000
Iterations: 113
Function evaluations: 1140
>>> res[0]
array([1.07421473e-07, 9.99999993e-01])
>>> res[1]
-1.0000001999996087
>>> my.scripts.log_reader(mon)
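Note that nothing above depends on N or on the entries of B: the normalized() constraint simply rescales any candidate so its entries sum to 1, whatever the dimension. A quick illustration (hypothetical input; mystic may return a list or an array):
>>> c([1.0, 1.0, 1.0, 1.0])
[0.25, 0.25, 0.25, 0.25]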
I have a dataframe named "df" with 4 columns. Three columns are independent variables: x1, x2, and x3; the other variable, y, is the dependent variable.
I would like to calculate the distance, "pdist", between the dependent variable and each of the independent variables, so I first converted each column to a numpy array as follows:
y = df[["y"]].values
x1 = df[["x1"]].values
x2 = df[["x2"]].values
x3 = df[["x3"]].values
When I feed these arrays through this pipeline I got from GitHub:
import copy
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_correlation(Xval, Yval, pval=True, nruns=500):
    X, Y = np.atleast_1d(Xval), np.atleast_1d(Yval)
    if np.prod(X.shape) == len(X): X = X[:, None]
    if np.prod(Y.shape) == len(Y): Y = Y[:, None]
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    n = X.shape[0]
    if Y.shape[0] != X.shape[0]: raise ValueError('Number of samples must match')
    a, b = squareform(pdist(X)), squareform(pdist(Y))
    A = a - a.mean(axis=0)[None, :] - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0)[None, :] - b.mean(axis=1)[:, None] + b.mean()
    dcov2_xy = (A * B).sum() / float(n * n)
    dcov2_xx = (A * A).sum() / float(n * n)
    dcov2_yy = (B * B).sum() / float(n * n)
    dcor = np.sqrt(dcov2_xy) / np.sqrt(np.sqrt(dcov2_xx) * np.sqrt(dcov2_yy))
    if pval:
        greater = 0
        for i in range(nruns):
            Y_r = copy.copy(Yval)
            np.random.shuffle(Y_r)
            if distance_correlation(Xval, Y_r, pval=False) > dcor:
                greater += 1
        return (dcor, greater / float(nruns))
    else:
        return dcor
distance_correlation(x1, y, pval=True, nruns=500)
I get this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-32-c720c9df4e97> in <module>
----> 1 distance_correlation(bop_sp500, price, pval=True, nruns=500)
<ipython-input-17-e0b3aea12c32> in distance_correlation(Xval, Yval, pval, nruns)
9 n = X.shape[0]
10 if Y.shape[0] != X.shape[0]:raise ValueError('Number of samples must match')
---> 11 a, b = squareform(pdist(X)),squareform(pdist(Y))
12 A = a - a.mean(axis=0)[None, :] - a.mean(axis=1)[:, None] + a.mean()
13 B = b - b.mean(axis=0)[None, :] - b.mean(axis=1)[:, None] + b.mean()
~\Anaconda3\lib\site-packages\scipy\spatial\distance.py in pdist(X, metric, *args, **kwargs)
1997 s = X.shape
1998 if len(s) != 2:
-> 1999 raise ValueError('A 2-dimensional array must be passed.')
2000
2001 m, n = s
ValueError: A 2-dimensional array must be passed.
Could anyone identify where I am going wrong? I know the error originates from the manner in which I created my numpy arrays, but I have no clue how to fix it.
Please explain it with examples that use my variable definitions. I am new to Python.
Ok, so I finally managed to figure out the cause of the problem I faced:
The NumPy array that was being fed into the helper function was a 2-D array, while the helper function requires a "NumPy vector", i.e. a 1-D NumPy array.
The best way to create one is with the numpy.ravel() function. Hence, for my datasets, the code would be as follows (I have broken down the steps for simplicity):
# Create Arrays
y = df[["y"]].values
x1 = df[["x1"]].values
x2 = df[["x2"]].values
x3 = df[["x3"]].values
# Ravel Them
y = y.ravel()
x1 = x1.ravel()
x2 = x2.ravel()
x3 = x3.ravel()
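To see why the 1-D shape matters: the helper function promotes its input with X[:, None], so a (n, 1) column array becomes a 3-D (n, 1, 1) array, which pdist rejects, while a raveled (n,) vector becomes exactly the 2-D (n, 1) array pdist expects. A minimal sketch:
import numpy as np
a = np.ones((5, 1))        # 2-D column, as produced by df[["y"]].values
print(a[:, None].shape)    # (5, 1, 1) -> triggers the "2-dimensional array" error
b = a.ravel()              # 1-D vector, shape (5,)
print(b[:, None].shape)    # (5, 1)    -> the 2-D shape pdist expects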
I have the following system of nonlinear equations, for n = 0, ..., N:
c_0*c_n + c_1*c_(n+1) + ... + c_(N-n)*c_N = K_n
This is very similar to Cholesky's decomposition, but unfortunately it is not quite that.
I have found a very similar question, Solve a system of non-linear equations in Python (scipy.optimize.fsolve), but I don't know how to dynamically set up all the equations of my system.
How can I solve this system with NumPy, SciPy or any other Python package?
SymPy is the package that you're looking for. Here is a function that dynamically sets up your equations as described in your system:
import sympy as sp

def not_choleskys_decomposition(N):
    # Initialisation of list of variables.
    c = []
    K = []
    for n in range(N + 1):
        c.append(sp.Symbol("c_{}".format(n), real=True))
        K.append(sp.Symbol("K_{}".format(n), real=True))
    # Setup your N+1 equations.
    equations = []
    for n in range(N + 1):
        num_c = N + 1 - n
        c_terms = [c_i * c_j for c_i, c_j in zip(c[:num_c], c[n:][:num_c])]
        lhs = sp.Add(*c_terms)
        equations.append(sp.Eq(lhs, K[n]))
    return equations, c, K
Printing for verification can be done using the sympy.latex function, which converts the math directly to LaTeX, or the sympy.pprint function.
>>> test, _, _ = not_choleskys_decomposition(4)
>>> for t in test:
... sp.pprint(t)
c₀⋅c₀ + c₁⋅c₁ + c₂⋅c₂ + c₃⋅c₃ + c₄⋅c₄ = K₀
c₀⋅c₁ + c₁⋅c₂ + c₂⋅c₃ + c₃⋅c₄ = K₁
c₀⋅c₂ + c₁⋅c₃ + c₂⋅c₄ = K₂
c₀⋅c₃ + c₁⋅c₄ = K₃
c₀⋅c₄ = K₄
Replacing your variables can be done with the .subs method which belongs to each sympy.Eq object:
>>> equations, c_sym, K_sym = not_choleskys_decomposition(4)
>>> dict_replace = {c_sym[0]: 1.0}
>>> new_equation = equations[1].subs(dict_replace)
>>> sp.pprint(new_equation)
c₁⋅c₂ + 1.0⋅c₁ + c₂⋅c₃ + c₃⋅c₄ = K₁
Here's an example function that replaces your list of c or K.
def substitute_sym(equations, sym_list, val_list):
    # Confirm all dimensions are consistent.
    try:
        assert len(equations) == len(sym_list) == len(val_list)
    except AssertionError:
        raise IndexError("Inconsistent dimensions.")
    # Replace c symbols with values in equations.
    substituted_equations = []
    for eq in equations:
        _eq = eq.subs(dict(zip(sym_list, val_list)))
        substituted_equations.append(_eq)
    return substituted_equations
Testing it on c:
>>> equations, c_sym, K_sym = not_choleskys_decomposition(4)
>>> test_2 = substitute_sym(equations, c_sym, [1] * 5)
>>> for t in test_2:
... sp.pprint(t)
5 = K₀
4 = K₁
3 = K₂
2 = K₃
1 = K₄
Testing it on K:
>>> equations, c_sym, K_sym = not_choleskys_decomposition(4)
>>> K_vals = [1, 1.4, 0.2, 0.5, 3.0]
>>> test_3 = substitute_sym(equations, K_sym, K_vals)
>>> for t in test_3:
... sp.pprint(t)
c₀⋅c₀ + c₁⋅c₁ + c₂⋅c₂ + c₃⋅c₃ + c₄⋅c₄ = 1
c₀⋅c₁ + c₁⋅c₂ + c₂⋅c₃ + c₃⋅c₄ = 1.4
c₀⋅c₂ + c₁⋅c₃ + c₂⋅c₄ = 0.2
c₀⋅c₃ + c₁⋅c₄ = 0.5
c₀⋅c₄ = 3.0
Solving your set of equations can be done with sympy.solvers.solveset.nonlinsolve(system, *symbols), documented here.
Solving for c given K
>>> from sympy.solvers.solveset import nonlinsolve
>>> test_solve, c_sym, K_sym = not_choleskys_decomposition(1)
>>> test_replaced = substitute_sym(test_solve, K_sym, [sp.Rational(0.5)] * 2)
>>> solved = nonlinsolve(test_replaced, c_sym)
>>> sp.pprint(solved)
(pretty-printed output condensed: a FiniteSet of four complex (c₀, c₁) pairs, each of the form
(c₀, c₁) = (-(-1 + 2⋅c₁²)⋅c₁, c₁), with c₁ ∈ {-√6/4 - √2⋅ⅈ/4, -√6/4 + √2⋅ⅈ/4, √6/4 - √2⋅ⅈ/4, √6/4 + √2⋅ⅈ/4})
I hope this helps get you jump-started with SymPy if you choose to use it. I have rarely used its nonlinear solver, so I cannot guarantee anything past the examples; the choice of arguments matters, and the solvers can be temperamental. You'll notice I replaced K with sp.Rational(0.5) for each index; this was done to avoid an error that some solvers throw when given float values. Good luck.
EDIT:
Also note that you don't need to use the solvers in SymPy. I rarely do. I use the package for the symbolic mathematics in Python for LaTeX and equation manipulation.
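For instance, a minimal sketch of that workflow (the exact LaTeX string may vary slightly between SymPy versions):
>>> eqs, _, _ = not_choleskys_decomposition(2)
>>> print(sp.latex(eqs[1]))
c_{0} c_{1} + c_{1} c_{2} = K_{1}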
EDIT:
Making it work with scipy.optimize.fsolve
from sympy.utilities.lambdify import lambdify
from scipy.optimize import fsolve

def lambdify_equations_for_solving_c(equations, c_sym, K_sym, K_val):
    f = []
    equations_subbed = substitute_sym(equations, K_sym, K_val)
    for eq in equations_subbed:
        # Note that I reformulate each equation into an expression here
        # for determining the roots.
        f.append(lambdify(c_sym, eq.args[0] - eq.args[1]))
    return f

N = 4
equations, c_sym, K_sym = not_choleskys_decomposition(N)
K_vals = [0.1, 0, 0.1, 0, 0]
f = lambdify_equations_for_solving_c(equations, c_sym, K_sym, K_vals)

def fsolve_friendly(p):
    return [_f(*p) for _f in f]

c_sol = fsolve(fsolve_friendly, x0=(0, 0.1, -1.1, 0, 0))
fsolve returns warnings for non-converged results.
/home/ggarrett/anaconda3/envs/sigh/lib/python3.7/site-packages/scipy/optimize/minpack.py:162: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last five Jacobian evaluations.
warnings.warn(msg, RuntimeWarning)
That's a complete answer on how to form your equations to be solved as python callables. The solver routine is another challenge itself.
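One cheap check on whatever fsolve returns is to substitute the point back into the residual functions; the residuals should be near zero at a true root:
# residuals at the point fsolve returned
residuals = fsolve_friendly(c_sol)
print(max(abs(r) for r in residuals))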
I have a data frame with a mixture of numeric (15 fields) and categorical (5 fields) data.
I can create a complete distance matrix of the numeric fields following "create distance matrix using own calculation pandas".
I want to include the categorical fields as well.
Using as template:
import scipy
from scipy.spatial import distance_matrix
from scipy.spatial.distance import squareform
from scipy.spatial.distance import pdist
df2=pd.DataFrame({'col1':[1,2,3,4],'col2':[5,6,7,8],'col3':['cat','cat','dog','bird']})
df2
pd.DataFrame(squareform(pdist(df2.values, lambda u, v: np.sqrt((w*(u-v)**2).sum()))), index=df2.index, columns=df2.index)
In the squareform calculation, I would like to include the test np.where(u[2]==v[2], 0, 10) (and likewise for the other categorical columns).
How do I modify the lambda function to carry out this test as well?
Here, the distance between rows 0 and 1 is
sqrt((2-1)^2 + (6-5)^2 + (cat - cat)^2)
= sqrt(1 + 1 + 0)
and the distance between rows 0 and 2 is
sqrt((3-1)^2 + (7-5)^2 + (dog - cat)^2)
= sqrt(4 + 4 + 100)
etc.
Can anyone suggest how I can implement this algorithm?
import pandas as pd
import numpy as np
from scipy.spatial.distance import pdist, squareform
df2 = pd.DataFrame({'col1':[1,2,3,4],'col2':[5,6,7,8],'col3':['cat','cat','dog','bird']})
def fun(u, v):
    const = 0 if u[2] == v[2] else 10
    return np.sqrt((u[0]-v[0])**2 + (u[1]-v[1])**2 + const**2)
pd.DataFrame(squareform(pdist(df2.values, fun)), index=df2.index, columns=df2.index)
Result:
0 1 2 3
0 0.000000 1.414214 10.392305 10.862780
1 1.414214 0.000000 10.099505 10.392305
2 10.392305 10.099505 0.000000 10.099505
3 10.862780 10.392305 10.099505 0.000000
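Since the real frame has 15 numeric and 5 categorical fields, the same idea generalizes with a loop over index positions. Here is a sketch assuming the numeric columns come first in the frame (the split position and the penalty of 10 are illustrative):
num_count = 2  # number of leading numeric columns; 15 in the real data
def fun_general(u, v, penalty=10):
    # squared differences over the numeric columns
    num = sum((float(a) - float(b))**2 for a, b in zip(u[:num_count], v[:num_count]))
    # fixed squared penalty for each categorical mismatch
    cat = sum(penalty**2 for a, b in zip(u[num_count:], v[num_count:]) if a != b)
    return np.sqrt(num + cat)
pd.DataFrame(squareform(pdist(df2.values, fun_general)), index=df2.index, columns=df2.index)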
Link is here : https://www.csie.ntu.edu.tw/~r01922136/slides/ffm.pdf (slides 5-6)
Given the following matrices:
X : n * d
W : d * k
Is there an efficient way to calculate the n x 1 matrix using only matrix operations (e.g. numpy, tensorflow), where the jth element is the pairwise interaction term from slides 5-6 (in my attempt below, Σ_a Σ_b x_ja · x_jb · ⟨w_a, w_b⟩, with w_a the a-th row of W)?
EDIT:
My current attempt is this, but it's obviously not very space efficient, as it requires storing a matrix of size n*d*d:
n = 1000
d = 256
k = 32
x = np.random.normal(size=[n,d])
w = np.random.normal(size=[d,k])
xxt = np.matmul(x.reshape([n,d,1]),x.reshape([n,1,d]))
wwt = np.matmul(w.reshape([1,d,k]),w.reshape([1,k,d]))
output = xxt*wwt
output = np.sum(output,(1,2))
Avoid large temporary arrays
Not all algorithms are that easy or obvious to vectorize. The np.sum over xxt*wwt can be rewritten using np.einsum, which avoids the large (n, d, d) temporary. This should be faster than your solution, but einsum has some limitations of its own (e.g. no multithreading).
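For reference, a sketch of that einsum rewrite, mirroring the reshape-based wwt from your own code (no (n, d, d) temporary is formed):
wwt = np.dot(w.reshape((d, k)), w.reshape((k, d)))  # (d, d)
output = np.einsum('nd,ne,de->n', x, x, wwt)        # (n,) result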
I would therefore suggest using a compiler like Numba.
Example
import numpy as np
import numba as nb
import time
@nb.njit(fastmath=True, parallel=True)
def factorization_nb(w, x):
    n = x.shape[0]
    d = x.shape[1]
    k = w.shape[1]
    output = np.empty(n, dtype=w.dtype)
    wwt = np.dot(w.reshape((d, k)), w.reshape((k, d)))
    for i in nb.prange(n):
        acc = 0.
        for j in range(d):
            for jj in range(d):
                acc += x[i, j] * x[i, jj] * wwt[j, jj]
        output[i] = acc
    return output

def factorization_orig(w, x):
    n = x.shape[0]
    d = x.shape[1]
    k = w.shape[1]
    xxt = np.matmul(x.reshape([n, d, 1]), x.reshape([n, 1, d]))
    wwt = np.matmul(w.reshape([1, d, k]), w.reshape([1, k, d]))
    output = xxt * wwt
    output = np.sum(output, (1, 2))
    return output
Measuring Performance
n = 1000
d = 256
k = 32
x = np.random.normal(size=[n,d])
w = np.random.normal(size=[d,k])
# first call has some compilation overhead
res_1 = factorization_nb(w, x)

t1 = time.time()
for i in range(100):
    res_1 = factorization_nb(w, x)
    #res_2 = factorization_orig(w, x)
print(time.time() - t1)
Timings
factorization_nb: 4.2 ms per iteration
factorization_orig: 460 ms per iteration (i.e. the Numba version is about 110x faster)
For an einsum implementation in PyTorch, it would be something like
V = torch.randn([50, 10])
x = torch.randn([50])
result = (torch.einsum('ik,jk,i,j->', V, V, x, x)-torch.einsum('ik,ik,i,i->', V, V, x, x))/2
where we subtract the contribution from the feature weight being dotted with itself.
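As a quick sanity check that this matches the explicit pairwise sum Σ_{i<j} ⟨V_i, V_j⟩·x_i·x_j, one can compare against a double loop (small, illustrative sizes):
import torch
V = torch.randn([50, 10])
x = torch.randn([50])
fast = (torch.einsum('ik,jk,i,j->', V, V, x, x) - torch.einsum('ik,ik,i,i->', V, V, x, x)) / 2
slow = sum(torch.dot(V[i], V[j]) * x[i] * x[j] for i in range(50) for j in range(i + 1, 50))
assert torch.allclose(fast, slow, atol=1e-4)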
I have a system of ordinary differential equations with external deterministic inputs (controls) and stochastic components. How can I safely (with good code style) pass these additional input arguments to the equation function through tf.contrib.integrate.odeint(), besides the initial state, if there is a way to do it at all? Or is defining them in the outer scope and referring to them from within the equation function the only way to do it so far?
I have the same problem when I try to simulate a Hindmarsh-Rose model using the odeint solver. I would like to inject a current into the equations and do not know how to do it.
Here is a basic example:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
## Model parameters
# v' = u - a*v^3 + b*v^2 + I - z
# u' = c - d*v^2 - u
# z' = epsilon_z*(s*(v - v0) - z)
#parameters for v and u terms
a = 1.0
b = 3.0
c = -3.0
d = 5.0
v0 = -1.4
s = 4.0
epsilon_z = 0.002
# init tensions
v_init = -3.0
u_init = 0.0
z_init = +0.9
#injected current that is currently fixed
Id = 5
# What I would like to do:
# def I(t):
#     if 0.0 <= t < 300.0:
#         return 0.0
#     elif 300.0 <= t < 1700.0:
#         return Id
#     return 0.0
def HR_equation(state, t):
    v, u, z = tf.unstack(state)
    dv = -a * v*v*v + b * v*v + u - z + Id
    du = -d * v*v - u + c
    dz = epsilon_z * (s*(v - v0) - z)
    return tf.stack([dv, du, dz])
init_state = tf.constant([v_init, u_init, z_init], dtype=tf.float64)
t = np.linspace(0, 2000, num=5000)
tensor_state, tensor_info = tf.contrib.integrate.odeint(HR_equation,
init_state, t, full_output=True)
sess = tf.Session()
state, info = sess.run([tensor_state, tensor_info])
v, u, z = state.T
plt.plot(v, u)
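Regarding the original question of passing extra inputs: tf.contrib.integrate.odeint only hands (state, t) to the right-hand-side function, so one workable pattern (a sketch, not the only option) is to bind the current through a closure, with the time dependence expressed in TensorFlow ops so it stays graph-compatible:
def make_HR_equation(I_of_t):
    def HR_equation(state, t):
        v, u, z = tf.unstack(state)
        dv = -a * v*v*v + b * v*v + u - z + I_of_t(t)
        du = -d * v*v - u + c
        dz = epsilon_z * (s*(v - v0) - z)
        return tf.stack([dv, du, dz])
    return HR_equation

# step current: Id for 300 <= t < 1700, 0 otherwise (tf ops, not a Python if)
I_of_t = lambda t: Id * tf.cast((t >= 300.0) & (t < 1700.0), tf.float64)
tensor_state, tensor_info = tf.contrib.integrate.odeint(
    make_HR_equation(I_of_t), init_state, t, full_output=True)
Note that a hard step can slow the adaptive solver near the discontinuities; a smooth ramp (e.g. built from tf.sigmoid) is sometimes preferred.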