I have a network written with TensorFlow Keras. In part of my code I need to use scipy.spatial.cKDTree, so I decorated my function with @tf.function. When I try to build the tree I receive the following error. (Let me know if more details are required.)
The error happens when it tries to build the cKDTree. The size of pc2e is shape=(46080, 3).
In similar questions I found that it could be because of the Pillow version; I changed the version, but that did not solve the error.
Also, is there a better way to have a KDTree in TensorFlow?
TypeError: in user code:
/home/***/My_Models.py:731 var_layer *
tree2 = cKDTree(pc2e, leafsize=500, balanced_tree=False)
ckdtree.pyx:522 scipy.spatial.ckdtree.cKDTree.__init__ **
TypeError: __array__() takes 1 positional argument but 2 were given
Process finished with exit code 1
The function:
@tf.function
def var_layer(self, inputs, output):  # output: x y z i j k w
    inputs_v = tf.Variable(inputs)
    pc1_raw, pc2_raw = tf.split(inputs_v, num_or_size_splits=2, axis=4)
    # B x T x W x H x Channels
    s0, s1, s2, s3, s4 = pc1_raw.shape[0], pc1_raw.shape[1], pc1_raw.shape[2], pc1_raw.shape[3], pc1_raw.shape[4]
    pc1 = tf.reshape(pc1_raw[:, -1, :, :, 0:3], shape=[-1, s2 * s3, 3])
    pc2 = tf.reshape(pc2_raw[:, -1, :, :, 0:3], shape=[-1, s2 * s3, 3])
    # normal2 = tf.reshape(pc2_raw[:, -1, :, :, 3:6], [-1, s2 * s3, 3])
    # normal1 = tf.reshape(pc1_raw[:, -1, :, :, 3:6], [-1, s2 * s3, 3])
    Rq, Tr3 = tfg.dual_quaternion.to_rotation_translation(output)
    R33 = tfg.rotation_matrix_3d.from_quaternion(Rq)
    RT = tf.concat([R33, tf.expand_dims(Tr3, axis=2)], -1)
    RT = tf.pad(RT, [[0, 0], [0, 1], [0, 0]], constant_values=[0.0, 0.0, 0.0, 1.0])
    pc1 = tf.pad(pc1, [[0, 0], [0, 0], [0, 1]], constant_values=1)
    pc1 = tf.transpose(pc1, perm=[0, 2, 1])
    pc1_tr = tf.linalg.matmul(RT, pc1)
    pc1_tr = pc1_tr[:, 0:3]
    pc1_tr = tf.transpose(pc1_tr, perm=[0, 2, 1])  # B x WH x 3
    # remove zero values
    for epoch in range(self.Epochs):
        pc2e = pc2[epoch]
        print(pc2e)
        tree2 = cKDTree(pc2e, leafsize=500, balanced_tree=False)
        dist_in, ind = tree2.query(pc1_tr[epoch], k=1)
        nonempty = np.count_nonzero(dist_in)
        dist_in = np.sum(np.abs(dist_in))
        if nonempty != 0:
            dist_in = np.divide(dist_in, nonempty)
        dist_p2p = dist_in
        print(dist_p2p)
    return dist_p2p
Versions:
TensorFlow 2.3.0
SciPy 1.4.1
Pillow 8.2.0
Input of the function is a point cloud with this shape: Batch x Time x W x H x Channels
and the size of pc2e is shape=(46080, 3)
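For reference, a minimal sketch of routing the cKDTree call through tf.py_function, so it receives concrete NumPy arrays even under @tf.function (the function names here are made up for illustration, and no gradient flows through the SciPy call):

import numpy as np
import tensorflow as tf
from scipy.spatial import cKDTree

def mean_nn_distance(pc2e, pc1_tr):
    # Runs eagerly, so the incoming tensors can be converted to NumPy arrays.
    tree = cKDTree(pc2e.numpy(), leafsize=500, balanced_tree=False)
    dist, _ = tree.query(pc1_tr.numpy(), k=1)
    nonempty = np.count_nonzero(dist)
    return np.sum(np.abs(dist)) / nonempty if nonempty else 0.0

@tf.function
def point_to_point_distance(pc2e, pc1_tr):
    # tf.py_function steps out of graph mode just for the SciPy call.
    return tf.py_function(mean_nn_distance, inp=[pc2e, pc1_tr], Tout=tf.float64)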
My goal is to get a joint probability matrix (here we use counts as an example) from data samples. Now I can get the expected result, but I'm wondering how to optimize it. Here is my implementation:
def Fill2DCountTable(arraysList):
    '''
    :param arraysList: List of arrays, length=2
        each array is of shape (k, sampleSize),
        k == 1 (or None. numpy will align it) if it's single variable
        else k for a set of variables of size k
    :return: xyJointCounts, xMarginalCounts, yMarginalCounts
    '''
    jointUniques, jointCounts = np.unique(np.vstack(arraysList), axis=1, return_counts=True)
    _, xReverseIndexs = np.unique(jointUniques[[0]], axis=1, return_inverse=True)  ###HIGHLIGHT###
    _, yReverseIndexs = np.unique(jointUniques[[1]], axis=1, return_inverse=True)
    xyJointCounts = np.zeros((xReverseIndexs.max() + 1, yReverseIndexs.max() + 1), dtype=np.int32)
    xyJointCounts[tuple(np.vstack([xReverseIndexs, yReverseIndexs]))] = jointCounts
    xMarginalCounts = np.sum(xyJointCounts, axis=1)  ###HIGHLIGHT###
    yMarginalCounts = np.sum(xyJointCounts, axis=0)
    return xyJointCounts, xMarginalCounts, yMarginalCounts
def Fill3DCountTable(arraysList):
    # :param arraysList: List of arrays, length=3
    jointUniques, jointCounts = np.unique(np.vstack(arraysList), axis=1, return_counts=True)
    _, xReverseIndexs = np.unique(jointUniques[[0]], axis=1, return_inverse=True)
    _, yReverseIndexs = np.unique(jointUniques[[1]], axis=1, return_inverse=True)
    _, SReverseIndexs = np.unique(jointUniques[2:], axis=1, return_inverse=True)
    SxyJointCounts = np.zeros((SReverseIndexs.max() + 1, xReverseIndexs.max() + 1, yReverseIndexs.max() + 1), dtype=np.int32)
    SxyJointCounts[tuple(np.vstack([SReverseIndexs, xReverseIndexs, yReverseIndexs]))] = jointCounts
    SMarginalCounts = np.sum(SxyJointCounts, axis=(1, 2))
    SxJointCounts = np.sum(SxyJointCounts, axis=2)
    SyJointCounts = np.sum(SxyJointCounts, axis=1)
    return SxyJointCounts, SMarginalCounts, SxJointCounts, SyJointCounts
My use scenario is to do conditional independence tests over variables. The sample size is usually quite big (~10k) and each variable's categorical cardinality is relatively small (~10). I still find the speed unsatisfying.
How can I best optimize this code, or even the logic outside the code? I have some thoughts:
The ###HIGHLIGHT### lines. For a single X I may calculate (X;Y1), (Y2;X), (X;Y3|S1)... many times, so what if I cache each variable's (and conditioning set's) {uniqueValue: reverseIndex} dictionary and its marginal counts? Then I could look up the marginal counts directly (no need to sum) and map values to reverse indices by replacement (no need to call unique again). A sketch of this idea appears after the test example below.
How can I further use matrix parallelization to do the CI tests in batch, i.e. calculate (X;Y|S1), (X;Y|S2), (X;Y|S3)... simultaneously?
Will torch be faster than numpy on the same CPU? Or on GPU?
It's an open question. Thank you for any possible ideas. Big thanks for your help :)
================== A test example is as follows ==================
xs = np.array( [2, 4, 2, 3, 3, 1, 3, 1, 2, 1] )
ys = np.array( [5, 5, 5, 4, 4, 4, 4, 4, 6, 5] )
Ss = np.array([ [1, 0, 0, 0, 1, 0, 0, 0, 1, 1],
[1, 1, 1, 0, 1, 0, 1, 0, 1, 0] ])
xyJointCounts, xMarginalCounts, yMarginalCounts = Fill2DCountTable([xs, ys])
SxyJointCounts, SMarginalCounts, SxJointCounts, SyJointCounts = Fill3DCountTable([xs, ys, Ss])
get 2D from (X;Y): xMarginalCounts=[3 3 3 1], yMarginalCounts=[5 4 1], and xyJointCounts (added axes name FYI):
xy| 4 5 6
--|-------
1 | 2 1 0
2 | 0 2 1
3 | 3 0 0
4 | 0 1 0
get 3D from (X;Y|{Z1,Z2}): SxyJointCounts is of shape 4x4x3, where the first 4 means the cardinality of {Z1,Z2} (00, 01, 10, 11 with respective SMarginalCounts=[3 3 1 3]). SxJointCounts is of shape 4x4 and SyJointCounts is of shape 4x3.
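For what it's worth, here is a minimal sketch of the caching idea from thought (1), using the test example above: each variable's raw samples are encoded once into dense integer codes with np.unique(..., return_inverse=True), and the joint table is then filled with a single np.bincount pass over the code pairs (the function names are just for illustration):

import numpy as np

def encode(values):
    # Computed once per variable and cached; reusable for every table that involves it.
    uniques, codes = np.unique(values, return_inverse=True)
    return uniques, codes

def joint_counts_from_codes(x_codes, y_codes, x_card, y_card):
    # One bincount over the flattened (x, y) code pairs fills the whole table.
    flat = x_codes * y_card + y_codes
    return np.bincount(flat, minlength=x_card * y_card).reshape(x_card, y_card)

xs = np.array([2, 4, 2, 3, 3, 1, 3, 1, 2, 1])
ys = np.array([5, 5, 5, 4, 4, 4, 4, 4, 6, 5])

x_uniques, x_codes = encode(xs)   # cacheable per variable
y_uniques, y_codes = encode(ys)

xy = joint_counts_from_codes(x_codes, y_codes, len(x_uniques), len(y_uniques))
print(xy)               # same xyJointCounts as above
print(xy.sum(axis=1))   # xMarginalCounts = [3 3 3 1]
print(xy.sum(axis=0))   # yMarginalCounts = [5 4 1]

For a multi-variable conditioning set S, the same trick applies after encoding its columns once with np.unique(S, axis=1, return_inverse=True).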
I have a system given by this recursive relationship: x_t = A_t x_{t-1} + b_t. I wish to compute x_t for all t, with A_t, b_t and x_0 given. Is there a built-in function for that? If I use a loop it would be extremely slow. Thanks!
There is sort of a way. Let's say you have your A matrices in a 3D tensor with shape (T, N, N), where T is the total number of time steps and N is the size of your vector. Similarly, B values are in a 2D tensor (T, N). The first step in the computation would be:
x1 = A[0] @ x0 + B[0]
Where @ represents the matrix product. But you can convert this into a single matrix product. Suppose we add a value 1 at the end of x0, and we call that x0p (for prime):
x0p = tf.concat([x0, [1]], axis=0)
And now we build a new 3D tensor Ap with shape (T, N+1, N+1), such that for each A[i] we concatenate B[i] as a new column, and then we add a row with N zeros and a single one at the end:
AwithB = tf.concat([A, tf.expand_dims(B, 2)], axis=2)
AnewRow = tf.concat([tf.zeros((T, 1, N), A.dtype), tf.ones((T, 1, 1), A.dtype)], axis=2)
Ap = tf.concat([AwithB, AnewRow], axis=1)
As it turns out, you can now say:
x1p = Ap[0] @ x0p
And therefore:
x2p = Ap[1] @ x1p = Ap[1] @ Ap[0] @ x0p
So we just need to compute the product of all the matrices in Ap across the first dimension, with the newest matrix applied on the left (as in Ap[1] @ Ap[0] above). Unfortunately, there does not seem to be a direct operation to compute that with TensorFlow, but you can do it relatively fast with tf.scan:
Ap_prod = tf.scan(lambda acc, m: tf.matmul(m, acc), Ap)[-1]
And with that you just have to do:
xtp = Ap_prod @ x0p
Here is a proof of concept (the code is tweaked to support single examples and batches, either in the A and B values or in the x)
import tensorflow as tf

def compute_state(a, b, x):
    s = tf.shape(a)
    t = s[-3]
    n = s[-1]
    # Add final 1 to x
    xp = tf.concat([x, tf.ones_like(x[..., :1])], axis=-1)
    # Add B column to A
    a_b = tf.concat([a, tf.expand_dims(b, axis=-1)], axis=-1)
    # Make new final row for A
    a_row = tf.concat([tf.zeros_like(a[..., :1, :]),
                       tf.ones_like(a[..., :1, :1])], axis=-1)
    # Add new row to A
    ap = tf.concat([a_b, a_row], axis=-2)
    # Compute matrix product reduction (newest matrix multiplied on the left)
    ap_prod = tf.scan(lambda acc, m: tf.matmul(m, acc), ap)[..., -1, :, :]
    # Compute final result
    outp = tf.linalg.matvec(ap_prod, xp)
    return outp[..., :-1]

# Test
tf.random.set_seed(0)
a = tf.random.uniform((10, 5, 5), -1, 1)
b = tf.random.uniform((10, 5), -1, 1)
x = tf.random.uniform((5,), -1, 1)
y = compute_state(a, b, x)
# Also works with batches of (a, b) or x
a = tf.random.uniform((100, 10, 5, 5), -1, 1)
b = tf.random.uniform((100, 10, 5), -1, 1)
x = tf.random.uniform((100, 5), -1, 1)
y = compute_state(a, b, x)
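Since the question asks for x_t at every step, a small variation of the same idea (a sketch for the single, unbatched case only) is to keep all the partial products returned by tf.scan instead of only the last one:

import tensorflow as tf

def compute_all_states(a, b, x):
    # a: (T, N, N), b: (T, N), x: (N,). Returns (T, N) with x_1, ..., x_T.
    xp = tf.concat([x, tf.ones_like(x[..., :1])], axis=-1)                 # (N+1,)
    a_b = tf.concat([a, tf.expand_dims(b, axis=-1)], axis=-1)              # (T, N, N+1)
    a_row = tf.concat([tf.zeros_like(a[..., :1, :]),
                       tf.ones_like(a[..., :1, :1])], axis=-1)             # (T, 1, N+1)
    ap = tf.concat([a_b, a_row], axis=-2)                                  # (T, N+1, N+1)
    # Keep every cumulative product Ap[t] @ ... @ Ap[0] instead of only the last one.
    ap_prods = tf.scan(lambda acc, m: tf.matmul(m, acc), ap)               # (T, N+1, N+1)
    all_xp = tf.einsum('tij,j->ti', ap_prods, xp)                          # (T, N+1)
    return all_xp[..., :-1]

a = tf.random.uniform((10, 5, 5), -1, 1)
b = tf.random.uniform((10, 5), -1, 1)
x = tf.random.uniform((5,), -1, 1)
all_x = compute_all_states(a, b, x)   # all_x[t - 1] is x_t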
I'm totally new to TensorFlow, and I just want to implement a kind of selection function by using matrix multiplication.
Example below:
# input:
I = [[9.6, 4.1, 3.2]]
# selection: (a single "1" value, and the others are "0"s)
s = tf.transpose(tf.Variable([[a, b, c]]))
# e.g. s could be [[0, 1, 0]] or [[0, 0, 1]] or [[1, 0, 0]]
# result: (multiplication)
o = tf.matmul(I, s)
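A minimal runnable sketch of that selection-by-multiplication idea (assuming the selector is built with tf.one_hot rather than a trainable tf.Variable):

import tensorflow as tf

I = tf.constant([[9.6, 4.1, 3.2]])            # input row vector, shape (1, 3)
s = tf.transpose(tf.one_hot([1], depth=3))    # one-hot column vector [[0], [1], [0]], shape (3, 1)

o = tf.matmul(I, s)    # shape (1, 1); picks out the entry at index 1
print(o.numpy())       # [[4.1]]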
Sorry for the poor expression.
I intend to find the 'solution' among distribution functions with different means and sigmas (the values range from 0 to 1).
So now, I have three variables: i, j, index.
value1 = np.exp(-((index - m1[i]) ** 2.) / s1[i] ** 2.)
value2 = np.exp(-((index - m2[j]) ** 2.) / s2[j] ** 2.)
m1 = [1, 3, 5], s1 = [0.2, 0.4, 0.5]  # first graph
m2 = [3, 5, 7], s2 = [0.5, 0.5, 1.0]  # second graph
I want to get the maximum (or an optimum) of the total value,
e.g. value1 + value2 = 1 + 1 = 2, and one of the solutions is i = 2, j = 1, index = 5.
Or could I do this with another module?
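As a point of comparison, a plain NumPy brute-force sketch of the search described above (not the TensorFlow selection approach; the index grid and its range are assumptions for illustration):

import numpy as np

m1, s1 = np.array([1, 3, 5]), np.array([0.2, 0.4, 0.5])
m2, s2 = np.array([3, 5, 7]), np.array([0.5, 0.5, 1.0])
index = np.linspace(0, 10, 1001)   # assumed search grid for index

# value1[i, n] and value2[j, n] for every i, j and grid point n, via broadcasting
value1 = np.exp(-((index[None, :] - m1[:, None]) ** 2) / s1[:, None] ** 2)
value2 = np.exp(-((index[None, :] - m2[:, None]) ** 2) / s2[:, None] ** 2)
total = value1[:, None, :] + value2[None, :, :]   # shape (3, 3, 1001)

i, j, n = np.unravel_index(np.argmax(total), total.shape)
print(i, j, index[n], total[i, j, n])   # one maximizer with total 2.0 (i=2, j=1, index=5 is another)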
I want to solve the following non-linear system of equations:
K x + Σ_{k=1..N} α_k (A_k x + a_k) = 0
(1/2) xᵀ A_k x + a_k · x = 0,   for k = 1, ..., N
Notes
the dot between a_k and x represents dot product.
the 0 in the first equation represents the 0 vector, and the 0 in the second equation is the scalar 0
all the matrices are sparse if that matters.
Known
K is an n x n (positive definite) matrix
each A_k is a known (symmetric) matrix
each a_k is a known n x 1 vector
N is known (let's say N = 50). But I need a method where I can easily change N.
Unknown (trying to solve for)
x is an n x 1 vector.
each alpha_k for 1 <= k <= N is a scalar
My thinking.
I am thinking of using scipy's root to find x and each alpha_k. We essentially have n equations from the rows of the first (vector) equation and another N equations from the constraint equations, to solve for our n + N variables. Therefore we have the required number of equations to have a solution.
I also have a reliable initial guess for x and the alpha_k's.
Toy example.
n = 4
N = 2
K = np.matrix([[0.5, 0, 0, 0], [0, 1, 0, 0],[0,0,1,0], [0,0,0,0.5]])
A_1 = np.matrix([[0.98,0,0.46,0.80],[0,0,0.56,0],[0.93,0.82,0,0.27],[0,0,0,0.23]])
A_2 = np.matrix([[0.23, 0,0,0],[0.03,0.01,0,0],[0,0.32,0,0],[0.62,0,0,0.45]])
a_1 = np.matrix(scipy.rand(4,1))
a_2 = np.matrix(scipy.rand(4,1))
We are trying to solve for
x = [x1, x2, x3, x4] and alpha_1, alpha_2
Questions:
I can actually brute-force this toy problem and feed it to the solver. But how do I solve this toy problem in such a way that I can extend it easily to the case when I have, let's say, n = 50 and N = 50?
Will I probably have to explicitly compute the Jacobian for larger matrices?
Can anyone give me any pointers?
I think the scipy.optimize.root approach holds water, but steering clear of the trivial solution might be the real challenge for this system of equations.
In any event, this function uses root to solve the system of equations.
import numpy as np
from scipy.optimize import root

def solver(x0, alpha0, K, A, a):
    '''
    x0 - nx1 numpy array. Initial guess on x.
    alpha0 - Nx1 numpy array. Initial guess on alpha.
    K - nxn numpy.array.
    A - Length N list of nxn numpy.arrays.
    a - Length N list of nx1 numpy.arrays.
    '''
    # Establish the function that produces the left-hand side of the system of equations.
    n = K.shape[0]
    N = len(A)

    def lhs(x_alpha):
        '''
        x_alpha is a concatenation of x and alpha.
        '''
        x = np.ravel(x_alpha[:n])
        alpha = np.ravel(x_alpha[n:])
        lhs_top = np.ravel(K.dot(x))
        for k in range(N):
            lhs_top += alpha[k] * (np.ravel(np.dot(A[k], x)) + np.ravel(a[k]))
        lhs_bottom = [0.5 * x.dot(np.ravel(A[k].dot(x))) + np.ravel(a[k]).dot(x)
                      for k in range(N)]
        lhs = np.array(lhs_top.tolist() + lhs_bottom)
        return lhs

    # Solve the system of equations.
    x0.shape = (n, 1)
    alpha0.shape = (N, 1)
    x_alpha_0 = np.vstack((x0, alpha0))
    sol = root(lhs, x_alpha_0)
    x_alpha_root = sol['x']

    # Compute norm of residual.
    res = sol['fun']
    res_norm = np.linalg.norm(res)

    # Break out the x and alpha components.
    x_root = x_alpha_root[:n]
    alpha_root = x_alpha_root[n:]

    return x_root, alpha_root, res_norm
Running on the toy example, however, only produces the trivial solution.
# Toy example.
n = 4
N = 2
K = np.matrix([[0.5, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0.5]])
A_1 = np.matrix([[0.98, 0, 0.46, 0.80], [0, 0, 0.56, 0], [0.93, 0.82, 0, 0.27],
                 [0, 0, 0, 0.23]])
A_2 = np.matrix([[0.23, 0, 0, 0], [0.03, 0.01, 0, 0], [0, 0.32, 0, 0],
                 [0.62, 0, 0, 0.45]])
a_1 = np.matrix(np.random.rand(4, 1))
a_2 = np.matrix(np.random.rand(4, 1))
A = [A_1, A_2]
a = [a_1, a_2]
x0 = np.random.rand(n, 1)
alpha0 = np.random.rand(N, 1)
print('x0 =', x0)
print('alpha0 =', alpha0)
x_root, alpha_root, res_norm = solver(x0, alpha0, K, A, a)
print('x_root =', x_root)
print('alpha_root =', alpha_root)
print('res_norm =', res_norm)
Output is
x0 = [[ 0.00764503]
[ 0.08058471]
[ 0.88300129]
[ 0.85299622]]
alpha0 = [[ 0.67872815]
[ 0.69693346]]
x_root = [ 9.88131292e-324 -4.94065646e-324 0.00000000e+000
0.00000000e+000]
alpha_root = [ -4.94065646e-324 0.00000000e+000]
res_norm = 0.0
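Regarding point 2 of the question (an explicit Jacobian for larger problems): scipy.optimize.root accepts a callable jac, and for this system the Jacobian can be written down directly. A sketch, assuming each A_k is symmetric as stated (otherwise use 0.5 * (A_k + A_k.T) in the constraint rows):

import numpy as np

def jacobian(x_alpha, K, A, a):
    # Jacobian of the (n + N)-dimensional lhs with respect to (x, alpha).
    n = K.shape[0]
    N = len(A)
    x = np.ravel(x_alpha[:n])
    alpha = np.ravel(x_alpha[n:])
    J = np.zeros((n + N, n + N))
    # d(top)/dx = K + sum_k alpha_k * A_k
    J[:n, :n] = np.asarray(K) + sum(alpha[k] * np.asarray(A[k]) for k in range(N))
    for k in range(N):
        grad_k = np.ravel(np.asarray(A[k]).dot(x)) + np.ravel(a[k])
        J[:n, n + k] = grad_k    # d(top)/d(alpha_k) = A_k x + a_k
        J[n + k, :n] = grad_k    # d(constraint_k)/dx, using A_k symmetric
    return J

# Usage inside solver(): sol = root(lhs, x_alpha_0, jac=lambda z: jacobian(z, K, A, a))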