There is a function in NumPy that inserts given values into an array:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.insert.html
Is there something similar in tensorflow?
Alternatively, is there a function in TensorFlow that can upsample a tensor by inserting zeros between its values?
tf.nn.conv2d_transpose can do this upsampling (with careful design of output_shape and strides). Sample code:
import tensorflow as tf
import numpy as np
# a 20x20 single-channel input of ones
input = tf.convert_to_tensor(np.ones((1, 20, 20, 1)))
input = tf.cast(input, tf.float32)
# 3x3 kernel with a single 1 at its center, so the transposed convolution
# simply scatters each input value onto the strided output grid
b = np.zeros((3, 3, 1, 1))
b[1, 1, 0, 0] = 1
weight = tf.convert_to_tensor(b)
weight = tf.cast(weight, tf.float32)
# stride 2 leaves a zero between every pair of neighbouring input values
output = tf.nn.conv2d_transpose(input, weight, output_shape=(1, 40, 40, 1), strides=[1, 2, 2, 1])
sess = tf.Session()
print(sess.run(output[0, :, :, 0]))
I believe checking its API documentation will help you further.
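If all you need is the zero-interleaving itself (no learnable kernel), a reshape-and-pad trick gives the same result without a convolution. A minimal sketch, assuming an NHWC tensor and the same TF 1.x session API as above:
import tensorflow as tf
import numpy as np
x = tf.convert_to_tensor(np.arange(1, 5, dtype=np.float32).reshape(1, 2, 2, 1))
n, h, w, c = 1, 2, 2, 1
# add singleton axes after height and width, pad each with one zero,
# then flatten back: every value ends up followed by a zero in both dimensions
y = tf.reshape(x, [n, h, 1, w, 1, c])
y = tf.pad(y, [[0, 0], [0, 0], [0, 1], [0, 0], [0, 1], [0, 0]])
y = tf.reshape(y, [n, 2 * h, 2 * w, c])
with tf.Session() as sess:
    print(sess.run(y[0, :, :, 0]))
# [[1. 0. 2. 0.]
#  [0. 0. 0. 0.]
#  [3. 0. 4. 0.]
#  [0. 0. 0. 0.]]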
I am trying to work on a neural network in which some of the neurons are related by an explicit parametric equation.
I have tried the following code:
for i in range(npxx-1):
    for j in range(npxy-1):
        F[:,i,j,0] = a0_tf*U[:,i,j,0] + b0_tf*U[:,i,j,1]
        F[:,i,j,1] = a1_tf*U[:,i,j,1] + b1_tf*U[:,i,j,2]
        F[:,i,j,2] = a2_tf*U[:,i,j,2] + b2_tf*U[:,i,j,0]
But obviously it doesn't work, as tensor assignment is not allowed in the TensorFlow framework. What is the most straightforward way to do this? I have tried defining a function and using the @tf.function decorator, but this is not available in TensorFlow 1.2.1.
Thanks in advance.
Use tf.roll:
import tensorflow as tf
x = tf.random.uniform([2, 2, 2, 3])
a = tf.constant([0.1, 0.2, 0.3])
b = 1 - a
# shift the channel axis by one: channel k of rolled_x is channel (k+1) % 3 of x
rolled_x = tf.roll(x, -1, axis=-1)
output = x * tf.reshape(a, [1, 1, 1, 3]) + rolled_x * tf.reshape(b, [1, 1, 1, 3])
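To see why this reproduces the loop above: tf.roll with shift -1 makes channel k of rolled_x equal to channel (k+1) mod 3 of x, which is exactly the U[..., 1], U[..., 2], U[..., 0] pairing in your equations. A quick eager-mode check, reusing x, a, b, and output from the snippet above:
# channel 0 of the vectorized result should match a0*U0 + b0*U1
expected_c0 = a[0] * x[..., 0] + b[0] * x[..., 1]
print(tf.reduce_max(tf.abs(output[..., 0] - expected_c0)))  # ~0.0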
I have a tuple of NumPy arrays (of length 4, 5, 6, or more). How can I convert it to a TensorFlow tuple, given input like this:
import tensorflow as tf
import numpy as np
a = np.array([[20, 20], [40, 40]], dtype=np.int32)
b = np.array([[20, 20, 20], [40, 40, 40], [60, 60, 60]], dtype=np.int32)
c = np.array([[20, 20], [40, 40]], dtype=np.int32)
d = np.array([[20, 20, 20], [40, 40, 40], [60, 60, 60]], dtype=np.int32)
e = (a, b, c, d) # e is the tuple of NumPy arrays I want to convert to tensors
tf_shapes = ((None, 2), (None, 3), (2, 2), (3, 3))
tf_types = (tf.int64, tf.float32, tf.int64, tf.float32)
Currently I have to write a generator to convert it to a TensorFlow tuple:
def data_generator():
    for i in range(16):
        yield a, b, c, d

dataset = tf.data.Dataset.from_generator(data_generator, tf_types, tf_shapes).batch(batch_size=4, drop_remainder=True)

for sample in dataset:
    res = model(sample, training=False)
How can I get a sample directly without using tf.data.Dataset.from_generator?
I'm not sure if I understood your question correctly, but it appears that you just want a, b, c, and d converted to TensorFlow tensors without having to use tf.data.Dataset.from_generator.
In that case, you can simply use tf.convert_to_tensor:
import tensorflow as tf
import numpy as np
a_tensor = tf.convert_to_tensor(a, np.int32)
b_tensor = tf.convert_to_tensor(b, np.int32)
c_tensor = tf.convert_to_tensor(c, np.int32)
d_tensor = tf.convert_to_tensor(d, np.int32)
# use the tensors however you want
Additionally, if you want a single tensor similar to e in your code, you can stack them:
e_tensor = tf.stack(e, axis=0)
# e_tensor[0] == a_tensor, e_tensor[1] == b_tensor, ...
Note, however, that tf.stack requires all inputs to have the same shape, which the a, b, c, d above do not; in that case, keep them in a Python tuple instead (see the sketch below).
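If you need all four arrays together despite the shape mismatch, a plain Python tuple of tensors works; a minimal sketch:
e_tensors = tuple(tf.convert_to_tensor(arr) for arr in e)
# e_tensors[0] is a as a tensor, e_tensors[1] is b, and so on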
I'm currently learning TensorFlow, and I wonder why numpy.swapaxes(0, 3) is required.
I know that the result shape (1, 14, 14, 5) means [1 element [14 elements [14 elements [5 elements]]]],
and that after numpy.swapaxes(3, 0) it becomes (5, 14, 14, 1), i.e. 5 images.
Below is my code; please help me understand it. Thank you.
#load mnist data
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
#get only 1 image & reshape it
img = mnist.train.images[0].reshape(28,28)
plt.imshow(img, cmap='gray')
sess = tf.InteractiveSession()
#reshape image to get color = 1
img = img.reshape(-1,28,28,1)
#filter 3X3, count = 5
W1 = tf.Variable(tf.random_normal([3, 3, 1, 5], stddev=0.01))
#zero-padded USE
conv2d = tf.nn.conv2d(img, W1, strides=[1, 2, 2, 1], padding='SAME')
print(conv2d)
sess.run(tf.global_variables_initializer())
#make convoultion data
conv2d_img = conv2d.eval()
#print converted images
conv2d_img = np.swapaxes(conv2d_img, 0, 3)
for i, one_img in enumerate(conv2d_img):
    plt.subplot(1, 5, i+1), plt.imshow(one_img.reshape(14, 14), cmap='gray')
#pooling
pool = tf.nn.max_pool(conv2d, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print(pool)
sess.run(tf.global_variables_initializer())
pool_img = pool.eval()
#print pooling image
pool_img = np.swapaxes(pool_img, 0, 3)
for i, one_img in enumerate(pool_img):
    plt.subplot(1, 5, i+1), plt.imshow(one_img.reshape(7, 7), cmap='gray')
The swap is necessary because it reorders the axes so that you can iterate over the individual feature maps.
By default, TensorFlow uses the NHWC layout, where C = 1 here since the input is a grayscale image.
Therefore, the number of channels (1 for a grayscale image, 3 for an RGB one) has to be on the last axis of your data.
In your code the convolution output is (1, 14, 14, 5): one image (N), height 14, width 14, and 5 feature maps (C). Swapping axes 0 and 3 gives (5, 14, 14, 1), so the plotting loop can treat each of the 5 feature maps as a separate grayscale image.
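For illustration, here is what the swap does to the shapes (a minimal sketch with a dummy array):
import numpy as np
conv_out = np.zeros((1, 14, 14, 5))    # NHWC: 1 image, 14x14, 5 feature maps
per_map = np.swapaxes(conv_out, 0, 3)  # -> (5, 14, 14, 1)
# iterating over the first axis now yields one 14x14 feature map at a time,
# which is exactly what the plotting loop in your code does
print(per_map.shape)  # (5, 14, 14, 1)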
I have an image whose size is not even, so when a convolution scales it down by a factor of 2 and I then apply Conv2DTranspose, I don't get consistent sizes, which is a problem.
So I thought I'd pad the intermediate tensor with an extra row and column, with the same values as on the edges, for minimal disruption. How do I do this in Keras? Is it even possible? What are my alternatives?
With TensorFlow as the backend, you could use tf.concat() to append a duplicate row/column to your tensor.
Supposing you want to duplicate the last row/column:
import tensorflow as tf
from keras.layers import Lambda, Input
from keras.models import Model
import numpy as np
def duplicate_last_row(tensor):
    return tf.concat((tensor, tf.expand_dims(tensor[:, -1, ...], 1)), axis=1)

def duplicate_last_col(tensor):
    return tf.concat((tensor, tf.expand_dims(tensor[:, :, -1, ...], 2)), axis=2)
# --------------
# Demonstrating with TF:
x = tf.convert_to_tensor([[[1, 2, 3], [4, 5, 6]],
                          [[10, 20, 30], [40, 50, 60]]])
x = duplicate_last_row(duplicate_last_col(x))
with tf.Session() as sess:
    print(sess.run(x))
# [[[ 1 2 3 3]
# [ 4 5 6 6]
# [ 4 5 6 6]]
#
# [[10 20 30 30]
# [40 50 60 60]
# [40 50 60 60]]]
# --------------
# Using as a Keras Layer:
inputs = Input(shape=(5, 5, 3))
padded = Lambda(lambda t: duplicate_last_row(duplicate_last_col(t)))(inputs)
model = Model(inputs=inputs, outputs=padded)
model.compile(optimizer="adam", loss='mse', metrics=['mse'])
batch = np.random.rand(2, 5, 5, 3)
x = model.predict(batch, batch_size=2)
print(x.shape)
# (2, 6, 6, 3)
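As a side note, tf.pad with mode='SYMMETRIC' achieves the same edge duplication for a padding width of 1, without the concat helpers. A minimal sketch on a small tensor:
import tensorflow as tf
t = tf.convert_to_tensor([[[1, 2, 3], [4, 5, 6]]])  # shape (1, 2, 3)
padded = tf.pad(t, [[0, 0], [0, 1], [0, 1]], mode='SYMMETRIC')
# SYMMETRIC reflection of width 1 repeats the last row and column
with tf.Session() as sess:
    print(sess.run(padded))
# [[[ 1  2  3  3]
#   [ 4  5  6  6]
#   [ 4  5  6  6]]]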
I have a 3D tensor called X, of shape say [2,20,300] and I would like to apply dropout to only the third dimension. However, I want the dropped elements to be the same for the 20 instances (second dimension) but not necessarily for first dimension.
What is the behaviour of the following:
tf.nn.dropout(X[0], keep_prob=p)
Would it only act on the dimension that I want? If so, then for multiple first dimensions, I could loop over them and apply the above line.
See the documentation of tf.nn.dropout:
By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions.
So it is as simple as:
import tensorflow as tf
import numpy as np
data = np.arange(300).reshape((1, 1, 300))
data = np.tile(data, (2, 20, 1))
data_op = tf.convert_to_tensor(data.astype(np.float32))
data_op = tf.nn.dropout(data_op, 0.5, noise_shape=[2, 1, 300])
with tf.Session() as sess:
    data = sess.run(data_op)

for b in range(2):
    for c in range(20):
        # within each batch entry, every position along axis 1 shares the same mask
        assert np.allclose(data[b, 0, :], data[b, c, :])

print((data[0, 0, :] - data[1, 0, :]).sum())
# nonzero with high probability: the two batch entries use independent masks
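If you'd rather not hard-code the batch and feature sizes, noise_shape can also be assembled from the runtime shape. A sketch under the same TF 1.x dropout API; x_op here is a stand-in for your tensor X:
x_op = tf.convert_to_tensor(np.random.rand(2, 20, 300).astype(np.float32))
shape = tf.shape(x_op)
# share the mask along axis 1, decide independently along axes 0 and 2
dropped = tf.nn.dropout(x_op, 0.5, noise_shape=[shape[0], 1, shape[2]])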