When I try to roll the concatenation of two tensors, it crashes my notebook.
Does anyone know what is incorrect here? Thank you very much.
import tensorflow as tf

x = tf.convert_to_tensor([1, 2], dtype="int32")
y = tf.zeros(shape=(2), dtype="int32")
z = tf.concat([x, y], axis=0)
tf.roll(z, 1, axis=0)
I used the TensorFlow tf.keras.metrics.MeanSquaredError() metric to evaluate the mean squared error between two numpy arrays, but each time I call mse() it gives a different result.
a = np.random.random(size=(100, 2000))
b = np.random.random(size=(100, 2000))
for i in range(100):
    v = mse(a, b).numpy()
    plt.scatter(i, v)
    print(v)
where I had previously defined mse = tf.keras.metrics.MeanSquaredError(). Here is the output. Any idea what is going wrong?
np.random.random generates new random data on every run, so your code should result in a different mse on each run, shouldn't it?
Run 1:
[0.87148841 0.50221413 0.49858526 ... 0.22311888 0.71320089 0.36298912]
Run 2:
[0.14941241 0.78560523 0.62436783 ... 0.1865485 0.2730567 0.49300401]
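If the goal is to get the same numbers on every run, a minimal sketch (assuming the same mse metric setup as above) is to seed NumPy before generating the arrays:

import numpy as np
import tensorflow as tf

np.random.seed(42)  # fixed seed: both arrays are identical across runs
mse = tf.keras.metrics.MeanSquaredError()
a = np.random.random(size=(100, 2000))
b = np.random.random(size=(100, 2000))
print(mse(a, b).numpy())  # same value on every run

Note also that Keras metric objects are stateful: each call accumulates into a running average, so call mse.reset_states() between independent evaluations.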
I am trying to understand why the following code crashes my Colab session.
import numpy as np
import tensorflow as tf
x1 = np.random.rand(90000)
x2 = tf.random.uniform((90000,1)).numpy()
print(x1.shape, type(x1))
print(x2.shape, type(x2))
x1 - x2
I can see that memory explodes, which causes the crash, but I was hoping someone could explain exactly why this is happening. I also understand that it has to do with broadcasting arrays in numpy, and I am just wondering whether this is expected behavior so I can avoid it in the future.
The fix is np.squeeze(x2, axis=1) so the vectors have the same shape, but clearly there's something I don't understand about what numpy is doing under the hood. Any suggestions and clarifications welcome.
x1 has shape (90000,). x2 has shape (90000, 1). In the expression x1 - x2, broadcasting occurs (as you suspected), giving a result of shape (90000, 90000). Such an array of 64-bit floating point values requires 90000 * 90000 * 8 = 64,800,000,000 bytes (roughly 60 GiB), far more memory than a Colab session has.
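A minimal sketch of what broadcasting does here, plus two equivalent fixes (using small sizes so it runs safely):

import numpy as np

x1 = np.random.rand(5)     # shape (5,)
x2 = np.random.rand(5, 1)  # shape (5, 1)

# Broadcasting aligns trailing dimensions: (5,) is treated as (1, 5),
# so (5, 1) - (1, 5) produces a full (5, 5) matrix of pairwise differences.
print((x1 - x2).shape)  # (5, 5)

# Fix 1: drop the size-1 axis so both operands are 1-D.
print((x1 - np.squeeze(x2, axis=1)).shape)  # (5,)

# Fix 2: equivalently, flatten x2 with ravel().
print((x1 - x2.ravel()).shape)  # (5,)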
I'm a TensorFlow beginner.
I've tried to print Hello World using the TensorFlow 1.15.0 code below.
import tensorflow as tf

h = tf.constant("Hello")
w = tf.constant(" World!")
hw = h + w
with tf.Session() as sess:
    ans = sess.run(hw)
    print(ans)
When I run the code in a Jupyter notebook, b'Hello World!' comes out.
What I expected is just 'Hello World!'. Why does the b appear in front of my output?
Many thanks!
The b prefix indicates that it is a byte string, not a unicode string. You can use tf.print() to print it properly.
This question has already been answered here: The print of string constant is always attached with 'b' in TensorFlow
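For completeness, a minimal sketch of both options; decode() is plain Python on the bytes object that sess.run returns, and tf.print is the op suggested above:

import tensorflow as tf

h = tf.constant("Hello")
w = tf.constant(" World!")
hw = h + w
with tf.Session() as sess:
    ans = sess.run(hw)          # sess.run returns a Python bytes object
    print(ans.decode("utf-8"))  # decode bytes -> str: Hello World!
    sess.run(tf.print(hw))      # tf.print writes the string without the b prefix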
This is kind of a PyTorch beginner question. In PyTorch I'm trying to do element-wise division with two tensors of size [5,5,3]. In numpy it works fine using np.divide(), but somehow I get an error here. I'm using PyTorch version 0.1.12 with Python 3.5.
c = [torch.DoubleTensor of size 5x5x3]
input_patch = [torch.FloatTensor of size 5x5x3]
input_patch is a slice of a torch.autograd Variable, and c is made by doing c = torch.from_numpy(self.patch_filt[:, :, :, 0]).float()
When doing:
torch.div(input_patch, c)
I get this error that I don't understand.
line 317, in div
assert not torch.is_tensor(other)
AssertionError
Does it mean that variable c should not be a torch tensor? Casting c to also be a FloatTensor still gives the same error.
Thank you!
Anyway, mexmex, thanks to your comment I've solved it by defining c as
Variable(torch.from_numpy(self.patch_filt[:, :, :, 0])).float()
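A minimal self-contained sketch of the failure and the fix under the old 0.1.x autograd API (shapes and names here are illustrative, not from the original code):

import torch
from torch.autograd import Variable

# In old PyTorch, an op on a Variable expected the other operand
# to also be a Variable, not a raw tensor.
input_patch = Variable(torch.rand(5, 5, 3))
c = torch.rand(5, 5, 3)  # raw tensor -> AssertionError in torch.div

result = torch.div(input_patch, Variable(c))  # wrap c first
print(result.size())

In modern PyTorch (0.4 and later) Variable and Tensor were merged, so the original torch.div(input_patch, c) would work as-is.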
I'm trying debug statements in Python/TensorFlow 1.0 using Jupyter, but I don't get any output printed from tf.Print.
I thought sess.run (during training, in the code below) would evaluate the db1 tensor and print the output, which did not happen.
However, db1.eval() in the evaluate phase prints the entire tensor X, but without the message "X:".
def combine_inputs(X):
    db1 = tf.Print(X, [X], message='X:')
    return (tf.matmul(X, W) + b, db1)
<<training code>>
_, summary = sess.run([train_op, merged_summaries])
## The merged_summaries tensor triggers the combine_inputs function. There are
## other tensor functions/coding in between; I'm not giving the entire code to
## keep it simple. The code works as expected except for tf.Print.
<<evaluate code>>
print(db1.eval())
I am confused about the following:
a) Why is tf.Print not printing during sess.run during training?
b) Why is the explicit db1.eval() necessary? I expected tf.Print to trigger with
sess.run. If eval is required, I could copy tensor X in my code to db1
and evaluate it without tf.Print. Correct?
I tried going through other questions (like the one below), where it was suggested to use memory_util or a predefined function. As a learner I could not understand why tf.Print does not work in my scenario.
If anyone has encountered similar issues, please assist. Thanks!
Similar question in stackoverflow
According to the documentation, tf.Print prints to standard error (as of version 1.1), and it's not compatible with Jupyter notebooks. That's why you can't see any output.
Check here:
https://www.tensorflow.org/api_docs/python/tf/Print
You can check the terminal where you launched the Jupyter notebook to see the message.
import tensorflow as tf
tf.InteractiveSession()
a = tf.constant(1)
b = tf.constant(2)
opt = a + b
opt = tf.Print(opt, [opt], message="1 + 2 = ")
opt.eval()
In the terminal, I can see:
2018-01-02 23:38:07.691808: I tensorflow/core/kernels/logging_ops.cc:79] 1 + 2 = [3]
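As a side note (this concerns newer releases, not the 1.x docs linked above): TensorFlow 1.13+ added the lowercase tf.print, which in Jupyter and Colab writes to the notebook cell's output rather than the server's console, so the message shows up inline:

import tensorflow as tf

sess = tf.InteractiveSession()
a = tf.constant(1)
b = tf.constant(2)
opt = a + b
print_op = tf.print("1 + 2 =", opt)  # lowercase tf.print, not tf.Print
sess.run(print_op)                   # output appears in the notebook cell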