I have three arrays whose sizes alternate between 5x5 and 5x4. That is, the list of array sizes is
5x5
5x4
5x5
If I use vstack, it raises an error because of the size incompatibility.
Is there any way to use vstack so that it automatically fills in the missing column of the second matrix?
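vstack itself will not pad anything, but a minimal sketch of one workaround is to pad the narrower array to the common width first (the zero fill value and the array contents below are assumptions for illustration):

import numpy as np

a = np.ones((5, 5))
b = np.ones((5, 4))
c = np.ones((5, 5))

# pad every array on the right so they all reach the widest column count
width = max(m.shape[1] for m in (a, b, c))
padded = [np.pad(m, ((0, 0), (0, width - m.shape[1])), constant_values=0)
          for m in (a, b, c)]

stacked = np.vstack(padded)   # shape (15, 5)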
My final output should be a 2D grid that contains a value for each grid point. Is there a way to implement this in TensorFlow, where I can input a number of images and each image corresponds to a specific point in a 2D grid? I want my model to work such that, when I input a similar image, it detects that specific grid cell in the 2D output. I mean that each input image belongs to a specific area in the output image (which I divided into a grid for simplicity, to keep the number of locations finite).
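One way to frame this (an assumption on my part, since the question gives no code) is as plain classification over the grid cells: give each cell an index, label every training image with the index of the cell it belongs to, and train a small CNN to predict that index. A minimal sketch with hypothetical grid and image sizes:

import tensorflow as tf

GRID_H, GRID_W = 8, 8            # assumed grid resolution
NUM_CELLS = GRID_H * GRID_W      # one class per grid cell

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),          # assumed image size
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CELLS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# at inference time, map the predicted class index back to a grid cell:
# row, col = divmod(int(predicted_class), GRID_W)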
I have 209 cat/non-cat images and I am looking to augment my dataset. To do that, I am using the following code to convert each NumPy array of RGB values to greyscale. The problem is that the dimensions need to stay the same for my neural network to work, but after the conversion they differ. The code:
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140])
Normal image dimensions: (64, 64, 3)
After applying the filter: (64, 64)
I know that the missing 3 is probably the RGB channels or something, but I cannot find a way to add a "dummy" third dimension that would not affect the actual image. Can someone provide an alternative to the rgb2gray function that maintains the dimensions?
The whole point of applying that greyscale filter is to reduce the number of channels from 3 (i.e. R, G and B) down to 1 (i.e. grey).
If you really, really want to get a 3-channel image that looks just the same but takes 3x as much memory, just make all 3 channels equal:
grey = np.dstack((grey, grey, grey))
Or bake that into rgb2gray itself; note that the luma coefficients have to run down each column of the weight matrix, otherwise the three output channels come out as differently scaled copies of R+G+B rather than three equal grey values:
def rgb2gray(rgb):
    # each column repeats the 0.2989/0.5870/0.1140 weights, so all 3 output channels are equal
    return np.dot(rgb[..., :3], [[0.2989, 0.2989, 0.2989], [0.5870, 0.5870, 0.5870], [0.1140, 0.1140, 0.1140]])
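Alternatively, if the network only needs a third axis rather than three full copies of the data, a singleton channel dimension keeps the grey values untouched; this is not from the original answer, just a common NumPy pattern:

import numpy as np

def rgb2gray_keepdims(rgb):
    grey = np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])
    return grey[..., np.newaxis]   # shape (64, 64, 1) instead of (64, 64)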
I am currently trying to learn deep learning and NumPy. In a given example, a test set of 60 128x128 images of carrots is reshaped using
`carrots_test.reshape(carrots_test.shape[0], -1)`
The example then appends .T to the result. I understand that this means a transpose, but why would you transpose the newly flattened images?
I understand what it means to flatten an image and why, but I can't intuitively see why we need to transpose (swap the rows and columns of) the result.
There is no global reason to do it. Your application expects the shape to be (elements, images), not (images, elements). A reshape only adjusts the shape of the buffer; a transpose adjusts the strides of the dimensions and compensates by rearranging the shape.
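A minimal sketch of what the two steps do to the shapes, assuming the usual course layout of 60 RGB images of 128x128 pixels (the random data is just a stand-in):

import numpy as np

carrots_test = np.random.rand(60, 128, 128, 3)           # 60 images, 128x128, 3 channels

flat = carrots_test.reshape(carrots_test.shape[0], -1)   # (60, 49152): one image per row
flat_T = flat.T                                          # (49152, 60): one image per column

print(flat.shape, flat_T.shape)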
I am trying to create an image with imshow, but the bins in my matrix are not of equal size.
For example the following matrix
C = [[1,2,2],[2,3,2],[3,2,3]]
is for X = [1,4,8] and for Y = [2,4,9]
I know I can just set xticks and yticks, but I want the axes to be to scale. This means that the squares which make up the imshow image would need to be of different sizes.
Is it possible?
This seems like a job for pcolormesh.
From When to use imshow over pcolormesh:
Fundamentally, imshow assumes that all data elements in your array are to be rendered at the same size, whereas pcolormesh/pcolor associates elements of the data array with rectangular elements whose size may vary over the rectangular grid.
pcolormesh plots a matrix as cells and takes as arguments the x and y coordinates of the cell edges, which allows you to draw each cell with a different size.
I assume the X and Y of your example data are meant to be the sizes of the cells, so I converted them into edge coordinates with:
import numpy as np
import matplotlib.pyplot as plt

xSize=[1,4,8]
ySize=[2,4,9]
x=np.append(0,np.cumsum(xSize)) # gives [ 0  1  5 13]
y=np.append(0,np.cumsum(ySize)) # gives [ 0  2  6 15]
Then, if you want behaviour similar to imshow, you need to invert the y axis.
c=np.array([[1,2,2],[2,3,2],[3,2,3]])
plt.pcolormesh(x,-y,c)
Which gives us a mesh whose cells have the different widths and heights defined above.
I have a dataset with dimensions (32, 32, 73257), where 32x32 are the pixels of a single image.
How do I reshape it to (73257, 1024) so that every image is unrolled in a row?
So far, I did:
self.train_data = self.train_data.reshape(n_training_examples, number_of_pixels*number_of_pixels)
and it looks like I got garbage instead of normal pictures. I am assuming that the reshape was performed across the wrong dimension?
As suggested in the comments, first get every image in a column, then transpose:
self.train_data = self.train_data.reshape(-1, n_training_examples).T
The memory layout of your array will not be changed by any of these operations, so two contiguous pixels of any image will lie 73257 bytes apart (assuming a uint8 image), which may not be the best option if you want to process your data one image at a time. You will need to time and validate this, but creating a copy of the array may prove advantageous performance-wise:
self.train_data = self.train_data.reshape(-1, n_training_examples).T.copy()
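A quick sanity check on synthetic data (a hedged sketch with a tiny image count so it runs fast) to confirm that each row of the result really is one unrolled image:

import numpy as np

data = np.arange(32 * 32 * 4).reshape(32, 32, 4)   # 4 tiny "images" of 32x32
rows = data.reshape(-1, 4).T.copy()                # (4, 1024), one image per row

# row k must equal image k flattened in row-major order
assert np.array_equal(rows[0], data[:, :, 0].ravel())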