Linear programming: how to set a binary decision variable to 1 when a value in an array exceeds a certain threshold - optimization

I have an array whose entries are linear expressions in the decision variables. Let's say the decision variables take values such that the array = [1.7, 0.3, 0]. Now what I want is the following:
1) For each entry of the array: if the value is > 0.5, then a binary decision variable y1 = 1, else 0.
So y1 should turn out to be [1, 0, 0].
2) For each entry of the array: if the value is > 0.5, then a real-valued decision variable y2 = that
value, else 0. Hence y2 = [1.7, 0, 0].
3) For each entry of the array: if the value is > 0 and <= 0.5, then a binary decision variable y3 = 1, else 0. Hence
y3 = [0, 1, 0].
I know a big-M formulation can help, but I am struggling to find the right one.
Can somebody please help me with the formulation of the above 3 points? I am using Pyomo and Gurobi to program the problem.
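A minimal big-M sketch of one way to write these constraints, assuming each array entry can be treated as a linear expression that is nonnegative and bounded above by some constant M, and using a small tolerance eps to approximate the strict ">" (the names model, x, y1, y2, y3, ypos, M and eps are all placeholders of mine, with x standing in for your linear expressions):
import pyomo.environ as pyo

model = pyo.ConcreteModel()
n = 3
M, eps = 100.0, 1e-6          # M: upper bound on the expressions; eps: strict-inequality tolerance
model.I = pyo.RangeSet(0, n - 1)

model.x = pyo.Var(model.I, bounds=(0, M))         # placeholder for your linear expressions
model.y1 = pyo.Var(model.I, domain=pyo.Binary)    # 1 iff x[i] > 0.5
model.y2 = pyo.Var(model.I, bounds=(0, M))        # x[i] if x[i] > 0.5, else 0
model.y3 = pyo.Var(model.I, domain=pyo.Binary)    # 1 iff 0 < x[i] <= 0.5
model.ypos = pyo.Var(model.I, domain=pyo.Binary)  # helper: 1 iff x[i] > 0

# (1) y1[i] = 1 exactly when x[i] > 0.5
model.c1a = pyo.Constraint(model.I, rule=lambda m, i: m.x[i] - 0.5 <= M * m.y1[i])
model.c1b = pyo.Constraint(model.I, rule=lambda m, i: m.x[i] - 0.5 >= eps - M * (1 - m.y1[i]))

# (2) y2[i] equals x[i] when y1[i] = 1, and 0 otherwise
model.c2a = pyo.Constraint(model.I, rule=lambda m, i: m.y2[i] <= M * m.y1[i])
model.c2b = pyo.Constraint(model.I, rule=lambda m, i: m.y2[i] <= m.x[i])
model.c2c = pyo.Constraint(model.I, rule=lambda m, i: m.y2[i] >= m.x[i] - M * (1 - m.y1[i]))

# helper: ypos[i] = 1 exactly when x[i] > 0
model.c3a = pyo.Constraint(model.I, rule=lambda m, i: m.x[i] <= M * m.ypos[i])
model.c3b = pyo.Constraint(model.I, rule=lambda m, i: m.x[i] >= eps - M * (1 - m.ypos[i]))

# (3) y3[i] = 1 exactly when x[i] > 0 but x[i] <= 0.5
model.c3c = pyo.Constraint(model.I, rule=lambda m, i: m.y3[i] == m.ypos[i] - m.y1[i])
In the real model the x[i] would be your linear expressions; the tightness of M and the size of eps relative to your data matter for how well the solver behaves.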


Numpy multiply NxD by NxK to KxNxD

So I have an A = NxD matrix, and I'm trying to multiply each of its length-D row vectors by the B = NxK matrix to make a KxNxD array. That is, each of the N vectors of A should be scaled by the K different values that B holds for that same N.
I tried
A[:,None] * B
A[:,None,:] * B[None,:,:]
A * B[None, :]
They all kind of give the same thing, which is an NxNxK or NxNxD array.
So for example
A = [[1,0], [1,1]]
B = [[1, 1, 1], [2,2,2]]
Should give C which will be
C[0] = [[1,1,1], [2,2,2]]
Because A[:, 0] = [[1], [1]] times B, and
C[1] = [[0, 0, 0], [2,2,2]]
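A short sketch of one way to line up the broadcasting with np.einsum (my own addition). Note the worked example above corresponds to an output of shape (D, N, K); swapping the output subscripts gives the KxNxD shape from the title:
import numpy as np

A = np.array([[1, 0], [1, 1]])        # N x D
B = np.array([[1, 1, 1], [2, 2, 2]])  # N x K

# Matches the worked example: C[d, n, k] = A[n, d] * B[n, k], shape (D, N, K)
C = np.einsum('nd,nk->dnk', A, B)
# equivalent broadcasting form: C = A.T[:, :, None] * B[None, :, :]

# If the K x N x D shape from the title is what is wanted instead:
C_knd = np.einsum('nd,nk->knd', A, B)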

Converting a Segmented Ground Truth to a Contour Image efficiently with Numpy

Suppose I have a segmented image as a Numpy array, where each entry in the image is a number from 1, ..., C, C+1, where C is the number of segmentation classes and class C+1 is some background class. I want to find an efficient way to convert this to a contour image (a binary image where a contour pixel has value 1 and the rest have value 0), so that any pixel that has a neighbour of a different class in its 8-neighbourhood (or 4-neighbourhood) becomes a contour pixel.
The inefficient way would be something like:
import numpy as np

def isValidLocation(i, j, image_height, image_width):
    if i < 0:
        return False
    if i > image_height - 1:
        return False
    if j < 0:
        return False
    if j > image_width - 1:
        return False
    return True

def get8Neighbourhood(i, j, image_height, image_width):
    nbd = []
    for height_offset in [-1, 0, 1]:
        for width_offset in [-1, 0, 1]:
            if isValidLocation(i + height_offset, j + width_offset, image_height, image_width):
                nbd.append((i + height_offset, j + width_offset))
    return nbd

def getContourImage(seg_image):
    seg_image_height = seg_image.shape[0]
    seg_image_width = seg_image.shape[1]
    contour_image = np.zeros([seg_image_height, seg_image_width], dtype=np.uint8)
    for i in range(seg_image_height):
        for j in range(seg_image_width):
            nbd = get8Neighbourhood(i, j, seg_image_height, seg_image_width)
            for (m, n) in nbd:
                if seg_image[m][n] != seg_image[i][j]:
                    contour_image[i][j] = 1
                    break
    return contour_image
I'm looking for a more efficient, "vectorized" way of achieving this, as I need to be able to compute it at run time on batches of 8 images at a time in a deep-learning context. Any insights appreciated. Visual example below: the first image is the original image overlaid on the ground-truth segmentation mask (not the best segmentation, admittedly...); the second is the output of my code, which looks good but is way too slow. It takes me about 10 seconds per image with an Intel 9900K CPU.
Image credit: the SUN RGBD dataset.
This might work but it might have some limitations which I cannot be sure of without testing on the actual data, so I'll be relying on your feedback.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
# some sample data with few rectangular segments spread out
seg = np.ones((100, 100), dtype=np.int8)
seg[3:10, 3:10] = 20
seg[24:50, 40:70] = 30
seg[55:80, 62:79] = 40
seg[40:70, 10:20] = 50
plt.imshow(seg)
plt.show()
Now to find the contours, we will convolve the image with a kernel which should give 0 values when convolved within the same segment of the image and <0 or >0 values when convolved over image regions with multiple segments.
# kernel for convolving
k = np.array([[1, -1, -1],
              [1,  0, -1],
              [1,  1, -1]])
convolved = ndimage.convolve(seg, k)
# contour pixels
non_zeros = np.argwhere(convolved != 0)
plt.scatter(non_zeros[:, 1], non_zeros[:, 0], c='r', marker='.')
plt.show()
As you can see, on this sample data the kernel has a small limitation and misses two contour pixels, caused by the symmetric nature of the data (which I think would be a rare case in actual segmentation outputs).
For better understanding, this is the scenario (it occurs at the top-left and bottom-right corners of the rectangles) where the kernel convolution fails to identify the contour, i.e. misses one pixel:
[ 1, 1, 1]
[ 1, 1, 1]
[ 1, 20, 20]
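A quick numeric check of that claim (my own addition, reusing the kernel k from above): convolving this patch gives 0 at its centre, so that pixel is never flagged as contour.
patch = np.array([[1,  1,  1],
                  [1,  1,  1],
                  [1, 20, 20]])
print(ndimage.convolve(patch, k)[1, 1])  # prints 0 -> the centre pixel is missed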
Based on #sai's idea I came up with this snippet, which yielded the same result much, much faster than my original code. It runs in 0.039 seconds which, compared to the 8-10 seconds of the original, is quite a speed-up!
filters = []
for i in [0, 1, 2]:
    for j in [0, 1, 2]:
        filter = np.zeros([3, 3], dtype=int)
        if i == 1 and j == 1:
            pass
        else:
            filter[i][j] = -1
            filter[1][1] = 1
            filters.append(filter)

def getCountourImage2(seg_image):
    convolved_images = []
    for filter in filters:
        convolved_image = ndimage.correlate(seg_image, filter, mode='reflect')
        convolved_images.append(convolved_image)
    convolved_images = np.add.reduce(convolved_images)
    seg_image = np.where(convolved_images != 0, 255, 0)
    return seg_image
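For reference, a hypothetical usage on the sample seg array from the answer above (assuming the imports and seg defined earlier); the result is a mask with 255 on boundary pixels:
contour = getCountourImage2(seg)
plt.imshow(contour, cmap='gray')
plt.show()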

Best way to get joint probability matrix from categorical data

My goal is to get a joint probability matrix (here we use counts as the example) from data samples. I can already get the expected result, but I'm wondering how to optimize it. Here is my implementation:
import numpy as np

def Fill2DCountTable(arraysList):
    '''
    :param arraysList: List of arrays, length=2
        each array is of shape (k, sampleSize),
        k == 1 (or None; numpy will align it) if it's a single variable,
        else k for a set of variables of size k
    :return: xyJointCounts, xMarginalCounts, yMarginalCounts
    '''
    jointUniques, jointCounts = np.unique(np.vstack(arraysList), axis=1, return_counts=True)
    _, xReverseIndexs = np.unique(jointUniques[[0]], axis=1, return_inverse=True)  ###HIGHLIGHT###
    _, yReverseIndexs = np.unique(jointUniques[[1]], axis=1, return_inverse=True)
    xyJointCounts = np.zeros((xReverseIndexs.max() + 1, yReverseIndexs.max() + 1), dtype=np.int32)
    xyJointCounts[tuple(np.vstack([xReverseIndexs, yReverseIndexs]))] = jointCounts
    xMarginalCounts = np.sum(xyJointCounts, axis=1)  ###HIGHLIGHT###
    yMarginalCounts = np.sum(xyJointCounts, axis=0)
    return xyJointCounts, xMarginalCounts, yMarginalCounts

def Fill3DCountTable(arraysList):
    # :param arraysList: List of arrays, length=3
    jointUniques, jointCounts = np.unique(np.vstack(arraysList), axis=1, return_counts=True)
    _, xReverseIndexs = np.unique(jointUniques[[0]], axis=1, return_inverse=True)
    _, yReverseIndexs = np.unique(jointUniques[[1]], axis=1, return_inverse=True)
    _, SReverseIndexs = np.unique(jointUniques[2:], axis=1, return_inverse=True)
    SxyJointCounts = np.zeros((SReverseIndexs.max() + 1, xReverseIndexs.max() + 1, yReverseIndexs.max() + 1), dtype=np.int32)
    SxyJointCounts[tuple(np.vstack([SReverseIndexs, xReverseIndexs, yReverseIndexs]))] = jointCounts
    SMarginalCounts = np.sum(SxyJointCounts, axis=(1, 2))
    SxJointCounts = np.sum(SxyJointCounts, axis=2)
    SyJointCounts = np.sum(SxyJointCounts, axis=1)
    return SxyJointCounts, SMarginalCounts, SxJointCounts, SyJointCounts
My use scenario is conditional independence testing over the variables. The sample size is usually quite big (~10k) and each variable's categorical cardinality is relatively small (~10). I still find the speed unsatisfying.
How can I best optimize this code, or even the logic outside the code? Some thoughts I have:
The ###HIGHLIGHT### lines. For a single X I may calculate (X;Y1), (Y2;X), (X;Y3|S1)... many times, so what if I cache each variable's (and conditional set's) {uniqueValue: reverseIndex} dictionary and its marginal counts, so I can read the marginal counts directly (no need to sum) and reuse the reverse indices (no need to call unique)? (A sketch of this idea follows the test example below.)
How can I further use matrix parallelization to do the CI tests in batch, i.e. calculate (X;Y|S1), (X;Y|S2), (X;Y|S3)... simultaneously?
Will torch be faster than numpy on the same CPU? And on a GPU?
It's an open question. Thank you for any possible ideas. Big thanks for your help :)
================== A test example is as follows ==================
xs = np.array( [2, 4, 2, 3, 3, 1, 3, 1, 2, 1] )
ys = np.array( [5, 5, 5, 4, 4, 4, 4, 4, 6, 5] )
Ss = np.array([[1, 0, 0, 0, 1, 0, 0, 0, 1, 1],
               [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]])
xyJointCounts, xMarginalCounts, yMarginalCounts = Fill2DCountTable([xs, ys])
SxyJointCounts, SMarginalCounts, SxJointCounts, SyJointCounts = Fill3DCountTable([xs, ys, Ss])
get 2D from (X;Y): xMarginalCounts=[3 3 3 1], yMarginalCounts=[5 4 1], and xyJointCounts (added axes name FYI):
xy| 4 5 6
--|-------
1 | 2 1 0
2 | 0 2 1
3 | 3 0 0
4 | 0 1 0
get 3D from (X;Y|{Z1,Z2}): SxyJointCounts is of shape 4x4x3, where the first 4 means the cardinality of {Z1,Z2} (00, 01, 10, 11 with respective SMarginalCounts=[3 3 1 3]). SxJointCounts is of shape 4x4 and SyJointCounts is of shape 4x3.
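On the caching idea from point 1 above, here is a rough sketch (my own assumption about your setup, not benchmarked): factorize each variable once with np.unique(..., return_inverse=True), cache the integer codes and cardinalities per variable, and then build each joint table with a single np.bincount instead of re-running np.unique on the stacked samples. Applied to the xs/ys test example:
def fill_2d_count_table_from_codes(x_codes, n_x, y_codes, n_y):
    # x_codes/y_codes: integer codes in [0, n_x) and [0, n_y), precomputed once per variable
    flat = x_codes * n_y + y_codes
    xyJointCounts = np.bincount(flat, minlength=n_x * n_y).reshape(n_x, n_y)
    return xyJointCounts, xyJointCounts.sum(axis=1), xyJointCounts.sum(axis=0)

# one-time factorization per variable; these codes and cardinalities are what you would cache
x_uniques, x_codes = np.unique(xs, return_inverse=True)
y_uniques, y_codes = np.unique(ys, return_inverse=True)
xyJointCounts, xMarginalCounts, yMarginalCounts = fill_2d_count_table_from_codes(
    x_codes, len(x_uniques), y_codes, len(y_uniques))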

how to implement a variable array of ones and zeros in tensorflow

I'm totally new to tensorflow, and I just want to implement a kind of selection function using matrix multiplication.
example below:
#input:
I = [[9.6, 4.1, 3.2]]
#selection:(single "1" value , and the other are "0s")
s = tf.transpose(tf.Variable([[a, b, c]]))
e.g. s could be [[0, 1, 0]] or [[0, 0, 1]] or [[1, 0, 0]]
#result:(multiplication)
o = tf.matmul(I, s)
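A minimal runnable sketch of that selection-by-matmul idea (TF 2.x eager style; using tf.one_hot here is only my illustration of one possible s):
import tensorflow as tf

I = tf.constant([[9.6, 4.1, 3.2]])          # shape (1, 3)
s = tf.transpose(tf.one_hot([1], depth=3))  # column selector [[0], [1], [0]]
o = tf.matmul(I, s)                         # picks the second entry -> approximately [[4.1]]
print(o.numpy())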
sorry for the poor expression,
I intend to find the 'solution' among distribution functions with different means and sigmas (values ranging from 0 to 1).
So now I have three variables: i, j, index.
value1 = np.exp(-((index - m1[i]) ** 2.) / s1[i]** 2.)
value2 = np.exp(-((index - m2[j]) ** 2.) / s2[j]** 2.)
m1 = [1, 3, 5], s1 = [0.2, 0.4, 0.5]  # first graph
m2 = [3, 5, 7], s2 = [0.5, 0.5, 1.0]  # second graph
I want to maximize the total value,
e.g. value1 + value2 = 1 + 1 = 2, and one of the solutions is i = 2, j = 1, index = 5.
Or could I do this in some other module?
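A plain-numpy, brute-force sketch of the search described above (this is just my reading of what "total value" means; the index grid is an arbitrary choice of mine):
import numpy as np

m1, s1 = np.array([1, 3, 5]), np.array([0.2, 0.4, 0.5])
m2, s2 = np.array([3, 5, 7]), np.array([0.5, 0.5, 1.0])
index_grid = np.linspace(0, 10, 1001)

best_value, best_args = -np.inf, None
for i in range(3):
    for j in range(3):
        total = (np.exp(-((index_grid - m1[i]) ** 2.) / s1[i] ** 2.)
                 + np.exp(-((index_grid - m2[j]) ** 2.) / s2[j] ** 2.))
        k = int(np.argmax(total))
        if total[k] > best_value:
            best_value, best_args = total[k], (i, j, index_grid[k])

print(best_value, best_args)  # the maximum is 2.0; i = 2, j = 1, index = 5 attains it as well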

solving a sparse non-linear system of equations using scipy.optimize.root

I want to solve the following non-linear system of equations.
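Written out (this is my reconstruction from the lhs function in the answer further down), the system is:
K x + sum_{k=1..N} alpha_k (A_k x + a_k) = 0            (one n-dimensional vector equation)
0.5 x · (A_k x) + a_k · x = 0,  for k = 1, ..., N       (N scalar constraint equations)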
Notes
the dot between a_k and x represents dot product.
the 0 in the first equation represents the zero vector, and the 0 in the second equation is the scalar 0
all the matrices are sparse if that matters.
Known
K is an n x n (positive definite) matrix
each A_k is a known (symmetric) matrix
each a_k is a known n x 1 vector
N is known (let's say N = 50). But I need a method where I can easily change N.
Unknown (trying to solve for)
x is an n x 1 vector.
each alpha_k, for 1 <= k <= N, is a scalar
My thinking.
I am thinking of using scipy.optimize.root to find x and each alpha_k. We essentially have n equations from the rows of the first equation and another N equations from the constraint equations, to solve for our n + N unknowns. Therefore we have the required number of equations to have a solution.
I also have a reliable initial guess for x and the alpha_k's.
Toy example.
import numpy as np

n = 4
N = 2
K = np.matrix([[0.5, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0.5]])
A_1 = np.matrix([[0.98, 0, 0.46, 0.80], [0, 0, 0.56, 0], [0.93, 0.82, 0, 0.27], [0, 0, 0, 0.23]])
A_2 = np.matrix([[0.23, 0, 0, 0], [0.03, 0.01, 0, 0], [0, 0.32, 0, 0], [0.62, 0, 0, 0.45]])
a_1 = np.matrix(np.random.rand(4, 1))
a_2 = np.matrix(np.random.rand(4, 1))
We are trying to solve for
x = [x1, x2, x3, x4] and alpha_1, alpha_2
Questions:
I can actually brute force this toy problem and feed it to the solver. But how do I solve this toy problem in such a way that I can easily extend it to the case where, let's say, n = 50 and N = 50?
Will I have to explicitly compute the Jacobian for larger matrices?
Can anyone give me any pointers?
I think the scipy.optimize.root approach holds water, but steering clear of the trivial solution might be the real challenge for this system of equations.
In any event, this function uses root to solve the system of equations.
import numpy as np
from scipy.optimize import root

def solver(x0, alpha0, K, A, a):
    '''
    x0 - nx1 numpy array. Initial guess on x.
    alpha0 - Nx1 numpy array. Initial guess on alpha.
    K - nxn numpy.array.
    A - Length N list of nxn numpy.arrays.
    a - Length N list of nx1 numpy.arrays.
    '''
    # Establish the function that produces the residual (left-hand side) of the system of equations.
    n = K.shape[0]
    N = len(A)

    def lhs(x_alpha):
        '''
        x_alpha is a concatenation of x and alpha.
        '''
        x = np.ravel(x_alpha[:n])
        alpha = np.ravel(x_alpha[n:])
        lhs_top = np.ravel(K.dot(x))
        for k in range(N):
            lhs_top += alpha[k] * (np.ravel(np.dot(A[k], x)) + np.ravel(a[k]))
        lhs_bottom = [0.5 * x.dot(np.ravel(A[k].dot(x))) + np.ravel(a[k]).dot(x)
                      for k in range(N)]
        lhs = np.array(lhs_top.tolist() + lhs_bottom)
        return lhs

    # Solve the system of equations.
    x0.shape = (n, 1)
    alpha0.shape = (N, 1)
    x_alpha_0 = np.vstack((x0, alpha0))
    sol = root(lhs, x_alpha_0)
    x_alpha_root = sol['x']

    # Compute norm of residual.
    res = sol['fun']
    res_norm = np.linalg.norm(res)

    # Break out the x and alpha components.
    x_root = x_alpha_root[:n]
    alpha_root = x_alpha_root[n:]
    return x_root, alpha_root, res_norm
Running on the toy example, however, only produces the trivial solution.
# Toy example.
n = 4
N = 2
K = np.matrix([[0.5, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0.5]])
A_1 = np.matrix([[0.98, 0, 0.46, 0.80], [0, 0, 0.56, 0], [0.93, 0.82, 0, 0.27],
                 [0, 0, 0, 0.23]])
A_2 = np.matrix([[0.23, 0, 0, 0], [0.03, 0.01, 0, 0], [0, 0.32, 0, 0],
                 [0.62, 0, 0, 0.45]])
a_1 = np.matrix(np.random.rand(4, 1))
a_2 = np.matrix(np.random.rand(4, 1))
A = [A_1, A_2]
a = [a_1, a_2]
x0 = np.random.rand(n, 1)
alpha0 = np.random.rand(N, 1)
print('x0 =', x0)
print('alpha0 =', alpha0)
x_root, alpha_root, res_norm = solver(x0, alpha0, K, A, a)
print('x_root =', x_root)
print('alpha_root =', alpha_root)
print('res_norm =', res_norm)
Output is
x0 = [[ 0.00764503]
[ 0.08058471]
[ 0.88300129]
[ 0.85299622]]
alpha0 = [[ 0.67872815]
[ 0.69693346]]
x_root = [ 9.88131292e-324 -4.94065646e-324 0.00000000e+000
0.00000000e+000]
alpha_root = [ -4.94065646e-324 0.00000000e+000]
res_norm = 0.0
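As a possible way around the trivial root (a sketch of my own, not guaranteed to help on the real problem): restart root from several random initial guesses and keep the first converged solution whose x-component is not essentially zero.
def solve_nontrivial(K, A, a, n_restarts=20, tol=1e-8, scale=5.0):
    # call the solver above from random starting points and keep a root that is
    # both converged (small residual) and not the trivial (x, alpha) = (0, 0)
    n, N = K.shape[0], len(A)
    for _ in range(n_restarts):
        x0 = scale * (np.random.rand(n, 1) - 0.5)
        alpha0 = scale * (np.random.rand(N, 1) - 0.5)
        x_root, alpha_root, res_norm = solver(x0, alpha0, K, A, a)
        if res_norm < tol and np.linalg.norm(x_root) > 1e-6:
            return x_root, alpha_root, res_norm
    return None  # no nontrivial root found within the given number of restarts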