python shape too large to be a matrix - numpy

I'm using Python Keras to build a CNN model.
I followed the CNN MNIST example and modified it for my own code.
This is the example I found
# Read MNIST data
(X_Train, y_Train), (X_Test, y_Test) = mnist.load_data()
# Translation of data
X_Train40 = X_Train.reshape(X_Train.shape[0], 28, 28, 1).astype('float32')
X_Test40 = X_Test.reshape(X_Test.shape[0], 28, 28, 1).astype('float32')
My data is a CSV file with 30222 rows and 6 columns.
That corresponds to 10074 samples, where each sample is one 3 * 6 block of information.
For example, rows 1 ~ 3 of the matrix make up one block of information.
Then I changed the format of my data.
X_Train40 = X_Train.reshape(10074, 3, 6, 1)
X_Test40 = X_Test.reshape(4319, 3, 6, 1)
Then this error occurs.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-133-4f23172d450a> in <module>()
----> 1 X_Train40 = X_Train.reshape(10074, 3, 6, 1)
2 X_Test40 = X_Test.reshape(4319, 3, 6, 1)
~\Anaconda3\lib\site-packages\numpy\matrixlib\defmatrix.py in __array_finalize__(self, obj)
269 return
270 elif (ndim > 2):
--> 271 raise ValueError("shape too large to be a matrix.")
272 else:
273 newshape = self.shape
ValueError: shape too large to be a matrix.

Just guessing, but since the data comes from a CSV file, it was probably converted to an np.matrix, which has the restriction of being 2-dimensional.
Internally numpy will try to keep the dimensionality of the matrix, so to reshape to higher dimensions you will need to convert it to an ndarray first, like this:
X_Train = np.array(X_Train)
X_Test = np.array(X_Test)
X_Train40 = X_Train.reshape(10074, 3, 6, 1)
X_Test40 = X_Test.reshape(4319, 3, 6, 1)

You know... I had this problem today and I could not find a single answer that helped.
Your problem is probably solved at this point, but one thing to check when Python complains about "shape too large to be a matrix" is the type of your variable: is it a numpy.matrix or a numpy.ndarray?
If it is the former, then you are in trouble.
Try to avoid the numpy.matrix type, especially if you want to do any linear algebra operations or stack arrays (with (d/v/h)stack, etc.), and stick to numpy.ndarray.
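If you want a quick way to check, something like this (untested, and assuming X_Train/X_Test are the arrays you loaded from the CSV) will tell you which type you have and convert it:
import numpy as np

print(type(X_Train))             # numpy.matrix means reshaping beyond 2 dimensions will fail
X_Train = np.asarray(X_Train)    # converts a matrix to a plain ndarray (no copy if it already is one)
X_Test = np.asarray(X_Test)
print(type(X_Train))             # numpy.ndarray, so reshape(10074, 3, 6, 1) will now work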


take samples from a tuple in tensorflow dataset

I have time series features (a numpy array) and labels (the same length in time steps, also a numpy array). I used tf.keras.preprocessing.timeseries_dataset_from_array() to get tuples of (features (batched), labels (batched)).
My issue is that I want to split it into training and validation (the test set is already separate). I want it shuffled and windowed, so I cannot do the split before calling timeseries_dataset_from_array(), and afterwards I have a tuple. Can I do:
(train_features, train_labels) = (output_tuple[0].take(num_training_sample), output_tuple[1].take(num_training_sample))
or does this change the order of features and labels?
Update:
What I have is the following
# features of size (2757698, 21, 4, 1), a time series array
# labels of size (2757698, 3)
w = timeseries_dataset_from_array(
    features[:train_size, :, :],
    self.target[:train_size, :], self.WINDOW_SIZE, sequence_stride=1, sampling_rate=1,
    batch_size=batch_size, shuffle=True, start_index=start,
    end_index=end)
# output is a tuple per batch of shape ((256, 60, 21, 4, 1), (256, 3)), where 60 is the window size
The output is a generator that you can feed to model.fit()!
What I need is to split this train_dataset into train and validation, after the shuffle. As you can see, I already took the test data from the end, which is the most recent.
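One possible approach (a sketch only, not verified against this exact pipeline): since timeseries_dataset_from_array() returns a tf.data.Dataset of (features, labels) batches, the split can be done with Dataset.take() and Dataset.skip(), which keep each feature batch paired with its labels:
# assuming `w` is the shuffled, windowed dataset returned by timeseries_dataset_from_array()
num_batches = tf.data.experimental.cardinality(w).numpy()
num_train_batches = int(0.8 * num_batches)   # assumed 80/20 split, adjust as needed
train_ds = w.take(num_train_batches)         # first N batches for training
val_ds = w.skip(num_train_batches)           # remaining batches for validation

model.fit(train_ds, validation_data=val_ds)
One thing worth checking with this approach is whether the dataset reshuffles between epochs, since that would mix windows across the two splits.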

Separating custom keras metric inputs into two separate metrics and finding median error

I have a ResNet network that I am using for a camera pose network. I have replaced the final classifier layer with a 1024-unit dense layer followed by a 7-unit dense layer (the first 3 outputs for xyz, the final 4 for the quaternion).
My problem is that I want to record the xyz error and the quaternion error as two separate errors or metrics (instead of just the mean absolute error over all 7). The inputs to the custom metric template custom_error(y_true, y_pred) are tensors. I don't know how to separate the inputs into two different xyz and q arrays. The function runs at compile time, when the tensors are empty and don't have any numpy components.
Ultimately I want to get the median xyz and q error using
median = tensorflow_probability.stats.percentile(input,q=50, interpolation='linear').
Any help would be really appreciated.
You could use tf.slice() to extract just the first three elements of your model output.
import tensorflow as tf
# enabling eager mode to demo the slice fn
tf.compat.v1.enable_eager_execution()
import numpy as np

# just creating a random array of dimensions (2, 7)
# where 2 is just an arbitrary value chosen for the batch dimension
out = np.arange(0, 14).reshape(2, 7)
print(out)
# array([[ 0,  1,  2,  3,  4,  5,  6],
#        [ 7,  8,  9, 10, 11, 12, 13]])

# put it in a tf variable
out_tf = tf.Variable(out)

# now using the slice operator
xyz = tf.slice(out_tf, begin=[0, 0], size=[-1, 3])

# let's see what it looks like
print(xyz)
# <tf.Tensor: id=11, shape=(2, 3), dtype=int64, numpy=
# array([[0, 1, 2],
#        [7, 8, 9]])>
Could wrap something like this into your custom metric to get what you need.
def xyz_median(y_true, y_pred):
    """get the median of just the X,Y,Z coords
    UNTESTED though :)
    """
    # slice the prediction to get just the xyz columns
    xyz = tf.slice(y_pred, begin=[0, 0], size=[-1, 3])
    median = tfp.stats.percentile(xyz, q=50, interpolation='linear')
    return median
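For the two separate error metrics, a sketch along the same lines (untested, and assuming the first 3 output columns are xyz and the last 4 are the quaternion, as in the question) could look like this:
import tensorflow as tf
import tensorflow_probability as tfp

def xyz_median_error(y_true, y_pred):
    # absolute error on the first three columns (x, y, z), then the median over the batch
    err = tf.abs(y_true[:, :3] - y_pred[:, :3])
    return tfp.stats.percentile(err, q=50, interpolation='linear')

def q_median_error(y_true, y_pred):
    # absolute error on the last four columns (the quaternion), then the median over the batch
    err = tf.abs(y_true[:, 3:] - y_pred[:, 3:])
    return tfp.stats.percentile(err, q=50, interpolation='linear')

# model.compile(optimizer='adam', loss='mae', metrics=[xyz_median_error, q_median_error])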

Cleaner way to whiten each image in a batch using keras

I would like to whiten each image in a batch. The code I have to do so is this:
def whiten(self, x):
    shape = x.shape
    x = K.batch_flatten(x)
    mn = K.mean(x, 0)
    std = K.std(x, 0) + K.epsilon()
    r = (x - mn) / std
    r = K.reshape(r, (-1, shape[1], shape[2], shape[3]))
    return r
where x is (?, 320,320,1). I am not keen on the reshape function with a -1 arg. Is there a cleaner way to do this?
Let's see what the -1 does. From the TensorFlow documentation (because the Keras documentation is scarce compared to the TensorFlow one):
If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant.
So what this means:
import tensorflow as tf
from keras import backend as K

X = tf.constant([1, 2, 3, 4, 5])
K.reshape(X, [-1, 5])
# Add one more dimension, the number of columns should be 5, and keep the number of elements constant
# [[1 2 3 4 5]]

X = tf.constant([1, 2, 3, 4, 5, 6])
K.reshape(X, [-1, 3])
# Add one more dimension, the number of columns should be 3
# For the number of elements to be constant the number of rows should be 2
# [[1 2 3]
#  [4 5 6]]
I think it is simple enough. So what happens in your code:
# Let's assume we have 5 images, 320x320 with 3 channels
X = tf.ones((5, 320, 320, 3))
shape = X.shape

# Let's flatten the tensor so we can perform the rest of the computation
flatten = K.batch_flatten(X)
# What this did is: "Turn a nD tensor into a 2D tensor with same 0th dimension."
# (taken directly from the documentation, let's see that below)
flatten.shape
# (5, 307200)
# So all the other elements were squeezed into 1 dimension while keeping the batch_size the same

# ...The rest of the stuff in your code is executed here...

# So we did all we wanted, and now we want to revert the tensor to the shape it had previously
r = K.reshape(flatten, (-1, shape[1], shape[2], shape[3]))
r.shape
# (5, 320, 320, 3)
Besides, I can't think of a cleaner way to do what you want to do. If you ask me, your code is already clear enough.
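That said, if the intent is to standardize each image on its own (per sample rather than per feature across the batch), one way to avoid the flatten/reshape round trip entirely (a sketch, untested) is to reduce over the non-batch axes with keepdims=True and let broadcasting restore the shape:
from keras import backend as K

def whiten(self, x):
    # mean and std per sample, computed over height, width and channels
    mn = K.mean(x, axis=[1, 2, 3], keepdims=True)
    std = K.std(x, axis=[1, 2, 3], keepdims=True) + K.epsilon()
    # broadcasting keeps the original (?, 320, 320, 1) shape, so no reshape is needed
    return (x - mn) / std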

How does tf.space_to_depth() work in tensorflow?

I am a PyTorch user. I have a pretrained model in TensorFlow and I would like to transfer it to PyTorch. In one part of the model architecture, I mean in the TensorFlow-defined model, there is a function tf.space_to_depth which transforms an input of size (None, 38, 38, 64) into (None, 19, 19, 256). (https://www.tensorflow.org/api_docs/python/tf/space_to_depth) is the doc of this function. But I could not understand what this function actually does. Could you please provide some numpy code to illustrate it for me?
Actually I would like to build an exactly similar layer in PyTorch.
Some code in TensorFlow reveals another secret:
Here is some code:
import numpy as np
import tensorflow as tf

norm = tf.random_normal([1, 2, 2, 1], mean=0, stddev=1)
trans = tf.space_to_depth(norm, 2)

with tf.Session() as s:
    norm = s.run(norm)
    trans = s.run(trans)

print("Norm")
print(norm.shape)
for index, value in np.ndenumerate(norm):
    print(value)

print("Trans")
print(trans.shape)
for index, value in np.ndenumerate(trans):
    print(value)
And here is the output:
Norm
(1, 2, 2, 1)
0.695261
0.455764
1.04699
-0.237587
Trans
(1, 1, 1, 4)
1.01139
0.898777
0.210135
2.36742
As you can see above, in addition to the reshaping of the data, the tensor values have changed!
tf.space_to_depth divides your input into blocks and concatenates them along the depth dimension.
In your example the input is 38x38x64 (and I guess the block_size is 2). So the function divides your input into 4 (block_size x block_size) pieces and concatenates them along the depth, which gives your 19x19x256 output.
You just need to divide each of your channels (the input) into block_size*block_size sub-grids (each of size width/block_size x height/block_size) and concatenate all of these patches along the depth dimension. Should be pretty straightforward with numpy.
Hope it helps.
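To make that concrete, here is a small numpy sketch (untested against tf.space_to_depth for inputs with depth > 1, but it reproduces the block rearrangement and the shape change for NHWC input):
import numpy as np

def space_to_depth_np(x, block_size):
    # x is NHWC; every block_size x block_size spatial block is moved into the depth axis
    n, h, w, c = x.shape
    x = x.reshape(n, h // block_size, block_size, w // block_size, block_size, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h // block_size, w // block_size, block_size * block_size * c)

x = np.arange(16).reshape(1, 4, 4, 1)
print(space_to_depth_np(x, 2).shape)  # (1, 2, 2, 4)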
Conclusion: tf.space_to_depth() only outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension.
If you modify your code a little bit, like this
norm = tf.random_normal([1, 2, 2, 1], mean=0, stddev=1)
with tf.Session() as s:
    norm = s.run(norm)

trans = tf.space_to_depth(norm, 2)
with tf.Session() as s:
    trans = s.run(trans)
Then you will have the following results:
Norm
(1, 2, 2, 1)
-0.130227
2.04587
-0.077691
-0.112031
Trans
(1, 1, 1, 4)
-0.130227
2.04587
-0.077691
-0.112031
Hope this can help you.
A good reference for PyTorch is the implementation of the PixelShuffle module here. This shows the implementation of something equivalent to Tensorflow's depth_to_space. Based on that we can implement pixel_shuffle with a scaling factor less than 1 which would be like space_to_depth. E.g., downscale_factor=0.5 is like space_to_depth with block_size=2.
def pixel_shuffle_down(input, downscale_factor):
    batch_size, channels, in_height, in_width = input.size()
    block_size = int(1 / downscale_factor)
    out_channels = channels * block_size ** 2
    out_height = in_height // block_size
    out_width = in_width // block_size
    input_view = input.contiguous().view(
        batch_size, channels, out_height, block_size, out_width, block_size)
    shuffle_out = input_view.permute(0, 1, 3, 5, 2, 4).contiguous()
    return shuffle_out.view(batch_size, out_channels, out_height, out_width)
Note: I haven't verified this implementation yet and I'm not sure if it's exactly the inverse of pixel_shuffle, but this is the basic idea. I've also opened an issue on the PyTorch Github about this here. In NumPy the equivalent code would use reshape and transpose instead of view and permute respectively.
Using the split and stack functions along with permute in PyTorch gives us the same result as space_to_depth in TensorFlow does. Here is the code in PyTorch.
Assume that the input is in BHWC format.
Based on block_size and the input shape, we can calculate the output shape.
First, it splits the input on the "width" dimension, or dimension #2, by block_size. The result of this operation is a list of length d_width. It's just like cutting a cake (by block_size) into d_width pieces.
Then, for each piece, you reshape it so it has the correct output height and output depth (channels). Finally, we stack those pieces together and perform a permutation.
Hope it helps.
import torch

def space_to_depth(input, block_size):
    block_size_sq = block_size * block_size
    (batch_size, s_height, s_width, s_depth) = input.size()
    d_depth = s_depth * block_size_sq
    d_width = int(s_width / block_size)
    d_height = int(s_height / block_size)
    t_1 = input.split(block_size, 2)
    stack = [t_t.contiguous().view(batch_size, d_height, d_depth) for t_t in t_1]
    output = torch.stack(stack, 1)
    output = output.permute(0, 2, 1, 3)
    return output

tensorflow: ValueError: setting an array element with a sequence

I am playing with the fixed code from this question. I am getting the above error. Googling suggests it might be some kind of dimension mismatch, though my diagnostics do not show any:
with tf.Session() as sess:
    sess.run(init)
    # Fit all training data
    for epoch in range(training_epochs):
        for (_x_, _y_) in getb(train_X, train_Y):
            print("y data raw", _y_.shape)
            _y_ = tf.reshape(_y_, [-1, 1])
            print("y data ", _y_.get_shape().as_list())
            print("y place holder", yy.get_shape().as_list())
            print("x data", _x_.shape)
            print("x place holder", xx.get_shape().as_list())
            sess.run(optimizer, feed_dict={xx: _x_, yy: _y_})
Looking at the dimensions, everything is alright:
y data raw (20,)
y data [20, 1]
y place holder [20, 1]
x data (20, 10)
x place holder [20, 10]
Error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-131-00e0bdc140b2> in <module>()
16 print("x place holder", xx.get_shape().as_list() )
17
---> 18 sess.run(optimizer, feed_dict={xx: _x_, yy: _y_})
19
20 # # Display logs per epoch step
/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict)
355 e.args = (e.message,)
356 raise e
--> 357 np_val = np.array(subfeed_val, dtype=subfeed_t.dtype.as_numpy_dtype)
358 if subfeed_t.op.type == 'Placeholder':
359 if not subfeed_t.get_shape().is_compatible_with(np_val.shape):
ValueError: setting an array element with a sequence.
Any debugging tips?
This—not very helpful—error is raised when one of the values in the feed_dict argument to tf.Session.run() is a tf.Tensor object (in this case, the result of tf.reshape()).
The values in feed_dict must be numpy arrays, or some value x that can be implicitly converted to a numpy array using numpy.array(x). tf.Tensor objects cannot be implicitly converted, because doing so might require a lot of work: instead you have to call sess.run(t) to convert a tensor t to a numpy array.
As you noticed in your answer, using np.reshape(_y_, [-1, 1]) works, because it produces a numpy array (and because _y_ is a numpy array to begin with). In general, you should always prepare data to be fed using numpy and other pure-Python operations.
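In other words (a sketch, reusing the variable names from the question), you can either build the reshaped value with numpy before feeding it, or materialize the tensor with an extra sess.run() call, the former being the cheaper option:
# preferred: keep the feed values as numpy arrays
_y_ = np.reshape(_y_, [-1, 1])
sess.run(optimizer, feed_dict={xx: _x_, yy: _y_})

# works, but wasteful: evaluate the tf.reshape() result first, then feed the resulting numpy array
_y_np = sess.run(tf.reshape(_y_, [-1, 1]))
sess.run(optimizer, feed_dict={xx: _x_, yy: _y_np})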
Replacing the tf reshape with a plain numpy one helped:
_y_ = np.reshape(_y_, [-1, 1])
The actual reason why is still unclear, but it works.