I'm a beginner in TensorFlow and I want to apply the Frobenius norm to a tensor, but I couldn't find a function for it in TensorFlow and I couldn't implement it using TensorFlow ops. I can implement it with NumPy operations, but how can I do this using TensorFlow ops only?
My implementation using NumPy in Python:

import numpy as np

def Frobenius_Norm(tensor):
    x = np.power(tensor, 2)
    x = np.sum(x)
    x = np.sqrt(x)
    return x
You can compute the same thing with TensorFlow ops only:

def frobenius_norm_tf(M):
    return tf.reduce_sum(M ** 2) ** 0.5
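As a quick sanity check against NumPy (a sketch assuming TF 2.x eager execution; in newer versions tf.norm with ord='fro' gives the same result for a matrix):

import numpy as np
import tensorflow as tf

def frobenius_norm_tf(M):
    return tf.reduce_sum(M ** 2) ** 0.5

M = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(frobenius_norm_tf(M).numpy())                   # ~5.477
print(tf.norm(M, ord='fro', axis=[-2, -1]).numpy())   # ~5.477
print(np.linalg.norm(M.numpy()))                      # ~5.477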
I wonder if I can use a Tensorflow Dataset for training scikit-learn and other ML frameworks.
So, for example, can I use a tf.data.Dataset to train XGBoost, LogisticRegression, RandomForest classifiers, etc.?
That is, can I pass the tf.data.Dataset object into the .fit() method of these models for training?
I tried out:
import numpy as np
import tensorflow as tf

xs = np.asarray([i for i in range(10000)]).reshape(-1, 1)
ys = np.asarray([int(i % 2 == 0) for i in range(10000)])

xs = tf.data.Dataset.from_tensor_slices(xs)
ys = tf.data.Dataset.from_tensor_slices(ys)

cls.fit(xs, ys)  # cls is a scikit-learn classifier
I'm getting the following error:
TypeError: float() argument must be a string or a number, not 'TensorSliceDataset'
You can use the as_numpy_iterator() method; from the docs:
Returns an iterator which converts all elements of the dataset to numpy.
Following your example:
from sklearn.svm import SVC
x = list(xs.as_numpy_iterator())
y = list(ys.as_numpy_iterator())
clf = SVC(gamma='auto')
clf.fit(x, y)
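Putting the question and the answer together, a minimal runnable sketch (the SVC classifier and gamma='auto' come from the answer; everything else follows the question):

import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

xs = np.asarray([i for i in range(10000)]).reshape(-1, 1)
ys = np.asarray([int(i % 2 == 0) for i in range(10000)])

xs = tf.data.Dataset.from_tensor_slices(xs)
ys = tf.data.Dataset.from_tensor_slices(ys)

# scikit-learn expects array-likes, so materialize the datasets first
x = list(xs.as_numpy_iterator())
y = list(ys.as_numpy_iterator())

clf = SVC(gamma='auto')
clf.fit(x, y)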
I don't know how to create a Keras model that maximizes the binary cross-entropy loss.
Research:
1. https://intellipaat.com/community/17707/how-to-maximize-loss-function-in-keras
that said:
Simply multiply the loss by -1 to maximize the loss function while trying to minimize it:
new_loss = -loss
but using:
model.compile(loss=-1 * 'binary_crossentropy', optimizer=adam_optimizer())
resulted in this error:
ValueError: The model cannot be compiled because it has no loss to optimize.
2. https://stats.stackexchange.com/questions/303229/why-does-keras-binary-crossentropy-loss-function-return-wrong-values
gave me a custom function that approximates the Keras binary_crossentropy loss:
import math
import numpy as np
import keras.backend as K

def binary_crossentropy(y_true, y_pred):
    result = []
    for i in range(len(y_pred)):
        y_pred[i] = [max(min(x, 1 - K.epsilon()), K.epsilon()) for x in y_pred[i]]
        result.append(-np.mean([y_true[i][j] * math.log(y_pred[i][j]) + (1 - y_true[i][j]) * math.log(1 - y_pred[i][j])
                                for j in range(len(y_pred[i]))]))
    return np.mean(result)
but I cannot use it since it results in this error:
len is not well defined for symbolic Tensors. (43_54/Sigmoid:0) Please call `x.shape` rather than `len(x)` for shape information.
When I replace len with .shape[0], I get another error:
__index__ returned non-int (type NoneType)
I tinkered with the syntax in several more ways but nothing seems to work.
Any ideas?
python 3.6
tensorflow 1.15
keras 2.3.1
You just need to define a new loss based on the Keras implementation (note that -1 * 'binary_crossentropy' is string repetition with a negative count in Python, which yields an empty string, hence the "no loss to optimize" error):

import keras

def neg_binary_crossentropy(y_true, y_pred):
    return -1.0 * keras.losses.binary_crossentropy(y_true, y_pred)
And then use it in model.compile:
model.compile(loss=neg_binary_crossentropy, optimizer="adam")
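For reference, a minimal compile sketch using that loss (the toy one-layer model and its input shape are purely illustrative, not from the original question):

import keras

def neg_binary_crossentropy(y_true, y_pred):
    return -1.0 * keras.losses.binary_crossentropy(y_true, y_pred)

# Toy model: a single sigmoid unit over 10 input features
model = keras.models.Sequential([
    keras.layers.Dense(1, activation="sigmoid", input_shape=(10,)),
])
model.compile(loss=neg_binary_crossentropy, optimizer="adam")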
As stated in the title, is there a TensorFlow equivalent of the numpy.all() function to check if all the values in a bool tensor are True? What is the best way to implement such a check?
Use tf.reduce_all, as follows:
import tensorflow as tf
a = tf.constant([True, False, True, True], dtype=tf.bool)
res = tf.reduce_all(a)
sess = tf.InteractiveSession()
res.eval()
This returns False.
On the other hand, this returns True:
import tensorflow as tf
a = tf.constant([True, True, True, True], dtype=tf.bool)
res = tf.reduce_all(a)
sess = tf.InteractiveSession()
res.eval()
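If you are on TF 2.x with eager execution (a different setup from the session-based code above), no session is needed, and the reduction can also be taken along an axis:

import tensorflow as tf

a = tf.constant([[True, False], [True, True]])
print(tf.reduce_all(a))          # tf.Tensor(False, shape=(), dtype=bool)
print(tf.reduce_all(a, axis=0))  # tf.Tensor([ True False], shape=(2,), dtype=bool)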
You could use tf.experimental.numpy.all in TF 2.4 or later:
x = tf.constant([False, False])
tf.experimental.numpy.all(x)
One way of solving this problem would be to do:
def all(bool_tensor):
    bool_tensor = tf.cast(bool_tensor, tf.float32)
    all_true = tf.equal(tf.reduce_mean(bool_tensor), 1.0)
    return all_true

However, it's not a dedicated TensorFlow function, just a workaround.
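A quick check of this workaround, shown here in TF 2.x eager mode (an assumption; note that the helper shadows Python's built-in all, as in the answer above):

import tensorflow as tf

def all(bool_tensor):
    # same workaround as above: cast to float and compare the mean to 1.0
    bool_tensor = tf.cast(bool_tensor, tf.float32)
    return tf.equal(tf.reduce_mean(bool_tensor), 1.0)

print(all(tf.constant([True, True, True])))   # tf.Tensor(True, shape=(), dtype=bool)
print(all(tf.constant([True, False, True])))  # tf.Tensor(False, shape=(), dtype=bool)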
I'm trying to calculate the gradients of the samples from a Bernoulli distribution w.r.t. the probabilities p (of a sample being 1).
I tried using both the implementation of the Bernoulli distribution provided in tensorflow.contrib.distributions and my own simple implementation based on this discussion. However both methods fail when I try to calculate the gradients.
Using the Bernoulli implementation:
import tensorflow as tf
from tensorflow.contrib.distributions import Bernoulli
p = tf.constant([0.2, 0.6])
b = Bernoulli(p=p)
s = b.sample()
g = tf.gradients(s, p)
with tf.Session() as session:
    print(session.run(g))
The above code gives me the following error:
TypeError: Fetch argument None has invalid type <class 'NoneType'>
Using my implementation:
import tensorflow as tf
p = tf.constant([0.2, 0.6])
shape = [1, 2]
s = tf.select(tf.random_uniform(shape) - p > 0.0, tf.ones(shape), tf.zeros(shape))
g = tf.gradients(s, p)
with tf.Session() as session:
    print(session.run(g))
Same error:
TypeError: Fetch argument None has invalid type <class 'NoneType'>
Is there a way to calculate the gradients of Bernoulli samples?
(My TensorFlow version is 0.12).
You cannot backprop through a discrete stochastic node for obvious reasons: the gradients are not defined there.
However, if you approximate the Bernoulli with a continuous distribution controlled by a temperature parameter, then you can.
This idea is called the reparametrization trick and is implemented as RelaxedBernoulli in TensorFlow Probability (and also in tf.contrib); see the RelaxedBernoulli documentation.
You can specify the probability p of your Bernoulli, which is your random variable, et voilà.
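A sketch of what that looks like in practice (this assumes TF 2.x with TensorFlow Probability installed, not the TF 0.12 from the question; the temperature value is arbitrary):

import tensorflow as tf
import tensorflow_probability as tfp

p = tf.constant([0.2, 0.6])

with tf.GradientTape() as tape:
    tape.watch(p)
    dist = tfp.distributions.RelaxedBernoulli(temperature=0.5, probs=p)
    s = dist.sample()        # a continuous relaxation of a Bernoulli sample

g = tape.gradient(s, p)      # finite gradients, unlike the hard Bernoulli sample
print(g)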
I'm trying to write my own cost function in TensorFlow, but apparently I cannot 'slice' the tensor object?
import tensorflow as tf
import numpy as np
# Establish variables
x = tf.placeholder("float", [None, 3])
W = tf.Variable(tf.zeros([3,6]))
b = tf.Variable(tf.zeros([6]))
# Establish model
y = tf.nn.softmax(tf.matmul(x,W) + b)
# Truth
y_ = tf.placeholder("float", [None,6])
def angle(v1, v2):
    return np.arccos(np.sum(v1*v2, axis=1))

def normVec(y):
    return np.cross(y[:,[0,2,4]], y[:,[1,3,5]])
angle_distance = -tf.reduce_sum(angle(normVec(y_),normVec(y)))
# This is the example code they give for cross entropy
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
I get the following error:
TypeError: Bad slice index [0, 2, 4] of type <type 'list'>
At present, TensorFlow can't gather on axes other than the first; it's a requested feature.
But for what you want to do in this specific situation, you can transpose, then gather 0,2,4, and then transpose back. It won't be crazy fast, but it works:
tf.transpose(tf.gather(tf.transpose(y), [0,2,4]))
This is a useful workaround for some of the limitations in the current implementation of gather.
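As a side note (this applies to newer TensorFlow releases, not necessarily the version the answer was written against): tf.gather now accepts an axis argument, so the same column selection can be written directly as

tf.gather(y, [0, 2, 4], axis=1)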
(It is also correct that you can't use a NumPy slice on a TensorFlow node; you can only run it and slice the output, and you also need to initialize those variables before you run.) You're mixing tf and np in a way that doesn't work.
x = tf.Something(...)
is a TensorFlow graph object. NumPy has no idea how to cope with such objects.
foo = sess.run(x)
is back to an object Python can handle.
You typically want to keep your loss calculation in pure tensorflow, so do the cross and other functions in tf. You'll probably have to do the arccos the long way, as tf doesn't have a function for it.
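For reference, here is a rough sketch of the same loss written only in TensorFlow ops. It assumes a TF version that provides tf.acos and tf.linalg.cross (which may not have existed when this answer was written), reuses y and y_ from the question, and simply mirrors the NumPy code rather than guaranteeing a numerically safe arccos:

import tensorflow as tf

def tf_angle(v1, v2):
    # arccos of the row-wise dot product, mirroring the NumPy angle()
    return tf.acos(tf.reduce_sum(v1 * v2, axis=1))

def tf_norm_vec(y):
    # gather the even and odd columns, then take their cross product
    a = tf.transpose(tf.gather(tf.transpose(y), [0, 2, 4]))
    b = tf.transpose(tf.gather(tf.transpose(y), [1, 3, 5]))
    return tf.linalg.cross(a, b)

angle_distance = -tf.reduce_sum(tf_angle(tf_norm_vec(y_), tf_norm_vec(y)))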
Just realized that the following failed:
cross_entropy = -tf.reduce_sum(y_*np.log(y))
You can't use NumPy functions on tf objects, and the indexing may be different too.
I think you can use the "Wraps Python function" method in TensorFlow. Here's the link to the documentation.
And as for the people who answered "Why don't you just use TensorFlow's built-in functions to construct it?": sometimes the cost function people are looking for cannot be expressed with tf's functions, or only with extreme difficulty.
This is because you have not initialized your variables, and because of this you do not have an actual Tensor there right now (you can read more in my answer here).
Just do something like this:
def normVec(y):
    print(y)
    return np.cross(y[:,[0,2,4]], y[:,[1,3,5]])

t1 = normVec(y_)
# and comment everything after it.

You will see that you do not have a concrete tensor yet, only Tensor("Placeholder_1:0", shape=TensorShape([Dimension(None), Dimension(6)]), dtype=float32).
Try initializing your variables
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
and then evaluate your variable with sess.run(y). P.S. You have not fed your placeholders up to this point.
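For completeness, a small sketch of feeding the placeholder when evaluating y, continuing from the session created above (the random input values are purely illustrative):

import numpy as np

# x is the [None, 3] placeholder from the question; feed it a small batch
feed = {x: np.random.rand(4, 3).astype(np.float32)}
print(sess.run(y, feed_dict=feed))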