A question about axis in tensorflow.stack (tensorflow==1.14)

Using tensorflow.stack what does it mean to have axis=-1 ?
I'm using tensorflow==1.14

Using axis=-1 simply means to stack the tensors along the last axis (following Python's negative indexing convention).
Let's take a look at how this works using these tensors of shape (2, 2):
>>> x = tf.constant([[1, 2], [3, 4]])
>>> y = tf.constant([[5, 6], [7, 8]])
>>> z = tf.constant([[9, 10], [11, 12]])
The default behavior for tf.stack as described in the documentation is to stack the tensors along the first axis (index 0) resulting in a tensor of shape (3, 2, 2)
>>> tf.stack([x, y, z], axis=0)
<tf.Tensor: shape=(3, 2, 2), dtype=int32, numpy=
array([[[ 1,  2],
        [ 3,  4]],

       [[ 5,  6],
        [ 7,  8]],

       [[ 9, 10],
        [11, 12]]], dtype=int32)>
Using axis=-1, the three tensors are stacked along the last axis instead, resulting in a tensor of shape (2, 2, 3)
>>> tf.stack([x, y, z], axis=-1)
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1,  5,  9],
        [ 2,  6, 10]],

       [[ 3,  7, 11],
        [ 4,  8, 12]]], dtype=int32)>
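Since stacking N rank-R tensors yields a rank-(R+1) result, axis=-1 is just shorthand for the largest valid axis; for the rank-2 tensors above the valid axes run from -3 to 2, and axis=2 produces the identical result (a quick check, assuming the same x, y, z):
>>> tf.stack([x, y, z], axis=2)  # equivalent to axis=-1 for rank-2 inputs
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1,  5,  9],
        [ 2,  6, 10]],

       [[ 3,  7, 11],
        [ 4,  8, 12]]], dtype=int32)>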

Related

shuffling the contents of Tensor

I have a tensor which is composed of 2D arrays representing audio frames with shape 121*400. I want to shuffle the rows of the individual arrays, and not the arrays contained in the tensor. Is this possible without iterating through the tensor and shuffling each array?
To shuffle each array you could use this:
def tf_shuffle_second_axis(t):
    # Draw an independent random permutation for each row along the second axis
    rnd = tf.argsort(tf.random.uniform(t.shape), axis=1)
    # Pair each permuted column index with its row index for gathering
    rnd = tf.concat([tf.repeat(tf.range(t.shape[0])[..., tf.newaxis, tf.newaxis],
                               tf.shape(rnd)[1], axis=1),
                     rnd[..., tf.newaxis]], axis=2)
    # Return shuffled tensor
    return tf.gather_nd(t, rnd, batch_dims=0)
For example
a = tf.reshape(tf.range(16), [4, 4])
<tf.Tensor: shape=(4, 4), dtype=int32, numpy=
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]], dtype=int32)>
tf_shuffle_second_axis(a)
Output
<tf.Tensor: shape=(4, 4), dtype=int32, numpy=
array([[ 2,  1,  0,  3],
       [ 4,  6,  5,  7],
       [ 9, 11, 10,  8],
       [14, 12, 13, 15]], dtype=int32)>
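For comparison (a shorter sketch, not part of the original answer): tf.random.shuffle permutes its input along the first axis, so mapping it over the rows gives the same per-row shuffle, at the cost of a Python-level loop inside tf.map_fn:
# Shuffle each row independently by mapping the 1D shuffle over the rows
tf.map_fn(tf.random.shuffle, a)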
UPDATE
To shuffle whole rows (keeping each row's contents intact), use tf.random.shuffle:
tf.random.shuffle(a)
Output
<tf.Tensor: shape=(4, 4), dtype=int32, numpy=
array([[ 8,  9, 10, 11],
       [ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [12, 13, 14, 15]], dtype=int32)>

Can I select arbitrary windows from the last dimension of a numpy array?

I'd like to write a numpy function that takes an MxN array A, a window length L, and an MxP array idxs of starting indices into the M rows of A, and that selects P arbitrary slices of length L from each of the M rows of A. Ideally it would work on the last dimension of A regardless of how many dimensions A has, so that all dims of A and idxs match except the last one. Examples:
If A is just 1D:
A = np.array([1, 2, 3, 4, 5, 6])
window_len = 3
idxs = np.array([1, 3])
result = magical_routine(A, idxs, window_len)
Where result is a 2x3 array since I selected 2 slices of len 3:
np.array([[ 2, 3, 4],
          [ 4, 5, 6]])
If A is 2D:
A = np.array([[ 1, 2, 3, 4, 5, 6],
              [ 7, 8, 9,10,11,12],
              [13,14,15,16,17,18]])
window_len = 3
idxs = np.array([[1, 3],
                 [0, 1],
                 [2, 2]])
result = magical_routine(A, idxs, window_len)
Where result is a 3x2x3 array since there are 3 rows of A, and I selected 2 slices of len 3 from each row:
np.array([[[ 2, 3, 4], [ 4, 5, 6]],
          [[ 7, 8, 9], [ 8, 9,10]],
          [[15,16,17], [15,16,17]]])
And so on.
I have discovered a number of inefficient ways to do this, along with ways that work only for a specific number of dimensions of A. For 2D, the following is pretty tidy:
col_idxs = np.add.outer(idxs, np.arange(window_len))
np.take_along_axis(A[:, np.newaxis], col_idxs, axis=-1)
I can't see a nice way to generalize this for 1D and other D's though...
Is anyone aware of an efficient way that generalizes to any number of dims?
For your 1d case
In [271]: A=np.arange(1,7)
In [272]: idxs = np.array([1,3])
Using the kind of iteration that this sort of question usually gets:
In [273]: np.vstack([A[i:i+3] for i in idxs])
Out[273]:
array([[2, 3, 4],
       [4, 5, 6]])
Alternatively, generate all the indices up front and index once. linspace is handy for this (though it's not the only option):
In [278]: j = np.linspace(idxs,idxs+3,3,endpoint=False)
In [279]: j
Out[279]:
array([[1., 3.],
       [2., 4.],
       [3., 5.]])
In [282]: A[j.T.astype(int)]
Out[282]:
array([[2, 3, 4],
       [4, 5, 6]])
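The same index grid can also be built with integer broadcasting, which avoids the float round-trip of linspace (an equivalent sketch; the prompt number is illustrative):
In [283]: A[idxs[:, None] + np.arange(3)]
Out[283]:
array([[2, 3, 4],
       [4, 5, 6]])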
For the 2D case:
In [284]: B
Out[284]:
array([[ 1,  2,  3,  4,  5,  6],
       [ 7,  8,  9, 10, 11, 12],
       [13, 14, 15, 16, 17, 18]])
In [285]: idxs = np.array([[1, 3],
     ...:                  [0, 1],
     ...:                  [2, 2]])
In [286]: j = np.linspace(idxs,idxs+3,3,endpoint=False)
In [287]: j
Out[287]:
array([[[1., 3.],
        [0., 1.],
        [2., 2.]],

       [[2., 4.],
        [1., 2.],
        [3., 3.]],

       [[3., 5.],
        [2., 3.],
        [4., 4.]]])
With a bit of trial and error, pair up the indices to get:
In [292]: B[np.arange(3)[:,None,None],j.astype(int).transpose(1,2,0)]
Out[292]:
array([[[ 2,  3,  4],
        [ 4,  5,  6]],

       [[ 7,  8,  9],
        [ 8,  9, 10]],

       [[15, 16, 17],
        [15, 16, 17]]])
Or iterate as in the first case, but with an extra layer:
In [294]: np.array([[B[j,i:i+3] for i in idxs[j]] for j in range(3)])
Out[294]:
array([[[ 2,  3,  4],
        [ 4,  5,  6]],

       [[ 7,  8,  9],
        [ 8,  9, 10]],

       [[15, 16, 17],
        [15, 16, 17]]])
With sliding windows:
In [295]: aa = np.lib.stride_tricks.sliding_window_view(A,3)
In [296]: aa.shape
Out[296]: (4, 3)
In [297]: aa
Out[297]:
array([[1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6]])
In [298]: aa[[1,3]]
Out[298]:
array([[2, 3, 4],
       [4, 5, 6]])
and
In [300]: bb = np.lib.stride_tricks.sliding_window_view(B,(1,3))
In [301]: bb.shape
Out[301]: (3, 4, 1, 3)
In [302]: bb[np.arange(3)[:,None],idxs,0,:]
Out[302]:
array([[[ 2,  3,  4],
        [ 4,  5,  6]],

       [[ 7,  8,  9],
        [ 8,  9, 10]],

       [[15, 16, 17],
        [15, 16, 17]]])
I got it! I was almost there:
def magical_routine(A, idxs, window_len=2000):
    col_idxs = np.add.outer(idxs, np.arange(window_len))
    return np.take_along_axis(A[..., np.newaxis, :], col_idxs, axis=-1)
I just needed to always add the new axis at A's second-to-last dim, and leave the remaining axes alone.
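As a quick sanity check, re-running the question's own examples against this version (output shapes and values match the ones listed above):
A1 = np.array([1, 2, 3, 4, 5, 6])
magical_routine(A1, np.array([1, 3]), 3)
# array([[2, 3, 4],
#        [4, 5, 6]])

A2 = np.array([[ 1, 2, 3, 4, 5, 6],
               [ 7, 8, 9,10,11,12],
               [13,14,15,16,17,18]])
magical_routine(A2, np.array([[1, 3], [0, 1], [2, 2]]), 3).shape
# (3, 2, 3)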

Tensorflow 2 - incompatible behavior between tf.slice and NumPy slice syntax

Question
Please confirm whether the behavior below is as designed and expected, an issue with tf.slice, or a mistake in my usage of tf.slice. If it is a mistake, kindly suggest how to correct it.
Background
The guide Introduction to tensor slicing - Extract tensor slices says NumPy-like slice syntax is an alternative to tf.slice:
Perform NumPy-like tensor slicing using tf.slice.
t1 = tf.constant([0, 1, 2, 3, 4, 5, 6, 7])
print(tf.slice(t1,
               begin=[1],
               size=[3]))
Alternatively, you can use a more Pythonic syntax. Note that tensor slices are evenly spaced over a start-stop range.
print(t1[1:4])
Problem
The goal is to update an inner region of the target (the dark orange region in the original figure).
TYPE = tf.int32
N = 4
D = 5
shape = (N, D)

# Target to update
Y = tf.Variable(
    initial_value=tf.reshape(tf.range(N*D, dtype=TYPE), shape=shape),
    trainable=True
)
print(f"Target Y: \n{Y}\n")
---
Target Y:
<tf.Variable 'Variable:0' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
tf.slice does not work:
# --------------------------------------------------------------------------------
# Slice region in the target to be updated
# --------------------------------------------------------------------------------
S = tf.slice(    # Error: "'EagerTensor' object has no attribute 'assign'"
    Y,
    begin=[0,1], # Coordinate (n,d) as the start point
    size=[3,2]   # Shape (3,2) -> (n+3, d+2) as the end point
)
print(f"Slice to update S: \n{S}\n")

# Values to set
V = tf.ones(shape=tf.shape(S), dtype=TYPE)
print(f"Values to set V: \n{V}\n")

# Assign V to the S region of Y
S.assign(V)
---
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-17-e5692b1750c8> in <module>
     24
     25 # Assign V to the S region of Y
---> 26 S.assign(V)

AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'assign'
However, slice syntax works.
S = Y[
    0:3, # From coordinate (n=0,d), slice rows (0,1,2) or 'size'=3 -> shape (3,?)
    1:3  # From coordinate (n=0,d=1), slice columns (1,2) or 'size'=2 -> shape (3,2)
]
print(f"Slice to update S: \n{S}\n")

# Values to set
V = tf.ones(shape=tf.shape(S), dtype=TYPE)
print(f"Values to set V: \n{V}\n")

# Assign V to the S region of Y
S.assign(V)
---
<tf.Variable 'UnreadVariable' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  1,  3,  4],
       [ 5,  1,  1,  8,  9],
       [10,  1,  1, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
In my understanding, the behavior above is expected, or at least not a bug. As the error says, there is no attribute called assign on tf.Tensor (EagerTensor under eager execution), but there is on tf.Variable. And in general, tf.slice returns a tensor as its output, so its result doesn't possess an assign attribute.
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'assign'
But when we do NumPy-like slicing on the tf.Variable and use it to modify the original variable, it works seamlessly.
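The reason the slice syntax works is that tf.Variable overrides __getitem__ to forward the variable itself, so the returned slice comes back wired for assignment. Roughly (a sketch of the equivalence, not the actual library internals; the next section walks through the real source):
# NumPy-like slicing on a Variable...
S = Y[0:3, 1:3]
# ...behaves like strided_slice with var pointing back at the Variable:
S = tf.strided_slice(Y, begin=[0, 1], end=[3, 3], var=Y)
# Either way, S.assign(V) writes through to Y.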
Possible Solution
A workaround is to use tf.strided_slice instead of tf.slice. If we follow its source code, we can see that it takes a var argument, which is the variable corresponding to input_:
#tf_export("strided_slice")
#dispatch.add_dispatch_support
def strided_slice(input_,
begin,
end,
..........
var=None,
name=None):
And when we pass a var that corresponds to input_, strided_slice returns an object whose assign method is the closure defined inside it:
def assign(val, name=None):
    """Closure that holds all the arguments to create an assignment."""
    if var is None:
        raise ValueError("Sliced assignment is only supported for variables")
    else:
        if name is None:
            name = parent_name + "_assign"
        return var._strided_slice_assign(
            begin=begin,
            end=end,
            strides=strides,
            value=val,
            name=name,
            begin_mask=begin_mask,
            end_mask=end_mask,
            ellipsis_mask=ellipsis_mask,
            new_axis_mask=new_axis_mask,
            shrink_axis_mask=shrink_axis_mask)
So, when we pass var in the tf.strided_slice, it will return an assignable object.
Code
Here is the full working code for reference.
import tensorflow as tf
print(tf.__version__)

TYPE = tf.int32
N = 4
D = 5
shape = (N, D)

# Target to update
Y = tf.Variable(
    initial_value=tf.reshape(tf.range(N*D, dtype=TYPE), shape=shape),
    trainable=True
)
Y
2.4.1
<tf.Variable 'Variable:0' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
Now, we use tf.strided_slice instead of tf.slice.
S = tf.strided_slice(
    Y,
    begin=[0, 1],
    end=[3, 3],
    var=Y,
    name='slice_op'
)
S
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 1,  2],
       [ 6,  7],
       [11, 12]], dtype=int32)>
Update the variable; this time there is no AttributeError.
# Values to set
V = tf.ones(shape=tf.shape(S), dtype=TYPE)
print(V)
print()

# Assign V to the S region of Y
S.assign(V)
tf.Tensor(
[[1 1]
 [1 1]
 [1 1]], shape=(3, 2), dtype=int32)

<tf.Variable 'UnreadVariable' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  1,  3,  4],
       [ 5,  1,  1,  8,  9],
       [10,  1,  1, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
Using NumPy-like slicing:
# Slicing
S = Y[
    0:3,
    1:3
]
S
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 1,  2],
       [ 6,  7],
       [11, 12]], dtype=int32)>
# Values to set
V = tf.ones(shape=tf.shape(S), dtype=TYPE)
print(V)

# Assign V to the S region of Y
S.assign(V)
tf.Tensor(
[[1 1]
 [1 1]
 [1 1]], shape=(3, 2), dtype=int32)

<tf.Variable 'UnreadVariable' shape=(4, 5) dtype=int32, numpy=
array([[ 0,  1,  1,  3,  4],
       [ 5,  1,  1,  8,  9],
       [10,  1,  1, 13, 14],
       [15, 16, 17, 18, 19]], dtype=int32)>
Materials
tf.Variable - tf.Tensor.

tensorflow 2 - how to directly update elements in tf.Variable X at indices?

Is there a way to directly update the elements in tf.Variable X at indices without creating a new tensor having the same shape as X?
tf.tensor_scatter_nd_update creates a new tensor, hence it appears not to update the original tf.Variable.
This operation creates a new tensor by applying sparse updates to the input tensor.
tf.Variable's assign apparently needs a new tensor value which has the same shape as X to update the tf.Variable X.
assign(
    value, use_locking=False, name=None, read_value=True
)
value: A Tensor. The new value for this variable.
About tf.tensor_scatter_nd_update, you're right that it returns a new tf.Tensor (not a tf.Variable). But about assign, which is an attribute of tf.Variable, I think you somewhat misread the document; the value is just the new content that you want to assign at particular indices of your old variable.
AFAIK, in tensorflow all tensors are immutable like Python numbers and strings; you can never update the contents of a tensor, only create a new one (source). And directly updating or manipulating a tf.Tensor or tf.Variable with NumPy-like item assignment is still not supported. Check the following GitHub issues to follow up on the discussions: #33131, #14132.
In numpy, we can do the in-place item assignment that you showed in the comment box.
import numpy as np
a = np.array([1,2,3])
print(a) # [1 2 3]
a[1] = 0
print(a) # [1 0 3]
A similar result can be achieved on a tf.Variable with the assign attribute.
import tensorflow as tf
b = tf.Variable([1,2,3])
b.numpy() # array([1, 2, 3], dtype=int32)
b[1].assign(0)
b.numpy() # array([1, 0, 3], dtype=int32)
Later, we can convert it to a tf.Tensor as follows.
b_ten = tf.convert_to_tensor(b)
b_ten.numpy() # array([1, 0, 3], dtype=int32)
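Besides sliced assign, tf.Variable also exposes in-place scatter methods such as scatter_nd_update, which takes the same indices/updates layout as tf.tensor_scatter_nd_update but mutates the variable directly (a minimal sketch):
v = tf.Variable([[1, 1], [1, 1], [1, 1]])
v.scatter_nd_update(indices=[[0, 1], [2, 0]], updates=[5, 10])
v.numpy()
# array([[ 1,  5],
#        [ 1,  1],
#        [10,  1]], dtype=int32)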
We can do such item assignment on a tf.Tensor too, but we need to convert it to a tf.Variable first (I know, not very intuitive).
tensor = [[1, 1], [1, 1], [1, 1]] # tf.rank(tensor) == 2
indices = [[0, 1], [2, 0]] # num_updates == 2, index_depth == 2
updates = [5, 10] # num_updates == 2
x = tf.tensor_scatter_nd_update(tensor, indices, updates)
x
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 1,  5],
       [ 1,  1],
       [10,  1]], dtype=int32)>
x = tf.Variable(x)
x
<tf.Variable 'Variable:0' shape=(3, 2) dtype=int32, numpy=
array([[ 1,  5],
       [ 1,  1],
       [10,  1]], dtype=int32)>
x[0].assign([5,1])
x
<tf.Variable 'Variable:0' shape=(3, 2) dtype=int32, numpy=
array([[ 5,  1],
       [ 1,  1],
       [10,  1]], dtype=int32)>
x = tf.convert_to_tensor(x)
x
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 5,  1],
       [ 1,  1],
       [10,  1]], dtype=int32)>

Matrix multiplication of two vectors

I'm trying to do a matrix multiplication of two vectors in numpy which would result in a 2D array.
Example
In [108]: b = array([[1],[2],[3],[4]])
In [109]: a =array([1,2,3])
In [111]: b.shape
Out[111]: (4, 1)
In [112]: a.shape
Out[112]: (3,)
In [113]: b.dot(a)
ValueError: objects are not aligned
As can be seen from the shapes, the array a isn't actually a matrix. The catch is to define a like this.
In [114]: a =array([[1,2,3]])
In [115]: a.shape
Out[115]: (1, 3)
In [116]: b.dot(a)
Out[116]:
array([[ 1,  2,  3],
       [ 2,  4,  6],
       [ 3,  6,  9],
       [ 4,  8, 12]])
How can I achieve the same result when the vectors are acquired as rows or columns of a matrix?
In [137]: mat = array([[ 1,  2,  3],
     ...:              [ 2,  4,  6],
     ...:              [ 3,  6,  9],
     ...:              [ 4,  8, 12]])
In [138]: x = mat[:,0] #[1,2,3,4]
In [139]: y = mat[0,:] #[1,2,3]
In [140]: x.dot(y)
ValueError: objects are not aligned
You are computing the outer product of two vectors. You can use the function numpy.outer for this:
In [18]: a
Out[18]: array([1, 2, 3])
In [19]: b
Out[19]: array([10, 20, 30, 40])
In [20]: numpy.outer(b, a)
Out[20]:
array([[ 10,  20,  30],
       [ 20,  40,  60],
       [ 30,  60,  90],
       [ 40,  80, 120]])
Use 2D arrays instead of 1D vectors and broadcasting with the * operator...
In [8]: #your code from above
In [9]: y = mat[0:1,:]
In [10]: y
Out[10]: array([[1, 2, 3]])
In [11]: x = mat[:,0:1]
In [12]: x
Out[12]:
array([[1],
       [2],
       [3],
       [4]])
In [13]: x*y
Out[13]:
array([[ 1,  2,  3],
       [ 2,  4,  6],
       [ 3,  6,  9],
       [ 4,  8, 12]])
It's the same catch as in the basic example: both x and y are perceived not as matrices but as one-dimensional arrays.
In [143]: x.shape
Out[143]: (4,)
In [144]: y.shape
Out[144]: (3,)
We have to add a second dimension to them, which will be of size 1.
In [171]: x = array([x]).transpose()
In [172]: x.shape
Out[172]: (4, 1)
In [173]: y = array([y])
In [174]: y.shape
Out[174]: (1, 3)
In [175]: x.dot(y)
Out[175]:
array([[ 1,  2,  3],
       [ 2,  4,  6],
       [ 3,  6,  9],
       [ 4,  8, 12]])