How to make a histogram of tensor columns in TensorFlow

I have a batch of images as a tensor of size [batch_size, w, h].
I wish to get a histogram of the values in each column.
This is what I came up with (but it works only for the first image in the batch and it's also very slow):
global_hist = []
net = tf.squeeze(net)
for i in range(batch_size):
    for j in range(1024):
        hist = tf.histogram_fixed_width(tf.slice(net, [i, 0, j], [1, 1024, 1]), [0.0, 0.2, 0.4, 0.6, 0.8, 1.0], nbins=10)
        global_hist[i].append(hist)
Is there an efficient way to do this?

OK, so I found a solution (though it's rather slow and does not allow fixing the bin edges), but someone may find this useful.
nbins = 10
net = tf.squeeze(net)
for i in range(batch_size):
    local_hist = tf.expand_dims(tf.histogram_fixed_width(tf.slice(net, [i, 0, 0], [1, 1024, 1]), [0.0, 1.0], nbins=nbins, dtype=tf.float32), [-1])
    for j in range(1, 1024):
        hist = tf.histogram_fixed_width(tf.slice(net, [i, 0, j], [1, 1024, 1]), [0.0, 1.0], nbins=nbins, dtype=tf.float32)
        hist = tf.expand_dims(hist, [-1])
        local_hist = tf.concat(1, [local_hist, hist])
    if i == 0:
        global_hist = tf.expand_dims(local_hist, [0])
    else:
        global_hist = tf.concat(0, [global_hist, tf.expand_dims(local_hist, [0])])
In addition, I found this link to be very useful:
https://stackoverflow.com/questions/41764199/row-wise-histogram/41768777#41768777
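Following the row-wise histogram idea from that link, here is a fully vectorized sketch of the per-column histogram (my own variable names, a random stand-in for net, and the assumption that the values lie in [0, 1]): bucketize each value, one-hot the bucket index, and sum over the row axis, which avoids the Python loops entirely.
import tensorflow as tf

nbins = 10
net = tf.random_uniform([4, 1024, 1024])  # stand-in for the real [batch_size, w, h] batch

# map each value in [0, 1] to a bin index in [0, nbins - 1]
bins = tf.clip_by_value(tf.to_int32(net * nbins), 0, nbins - 1)
# one-hot the bin index and sum over the row axis to get per-column counts
one_hot = tf.one_hot(bins, depth=nbins, dtype=tf.float32)  # [batch, w, h, nbins]
col_hist = tf.reduce_sum(one_hot, axis=1)                  # [batch, h, nbins]

with tf.Session() as sess:
    print(sess.run(col_hist).shape)  # (4, 1024, 10)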

Related

Can I use real probability distributions as labels for tf.nn.softmax_cross_entropy_with_logits?

In the TensorFlow manual, the description for labels says:
labels: Each row labels[i] must be a valid probability distribution.
Does that mean labels can look like below, if I have real probability distributions over the classes for each input?
[[0.1, 0.2, 0.05, 0.007 ... ]
[0.001, 0.2, 0.5, 0.007 ... ]
[0.01, 0.0002, 0.005, 0.7 ... ]]
And, is it more efficient than one-hot encoded labels?
Thank you in advance.
In a word, yes, you can use probabilities as labels.
The documentation for tf.nn.softmax_cross_entropy_with_logits says you can:
NOTE: While the classes are mutually exclusive, their probabilities
need not be. All that is required is that each row of labels is
a valid probability distribution. If they are not, the computation of the
gradient will be incorrect.
If using exclusive labels (wherein one and only
one class is true at a time), see sparse_softmax_cross_entropy_with_logits.
Let's have a short example to be sure it works ok:
import numpy as np
import tensorflow as tf
labels = np.array([[0.2, 0.3, 0.5], [0.1, 0.7, 0.2]])
logits = np.array([[5.0, 7.0, 8.0], [1.0, 2.0, 4.0]])
sess = tf.Session()
ce = tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits).eval(session=sess)
print(ce) # [ 1.24901222 1.86984602]
# manual check
predictions = np.exp(logits)
predictions = predictions / predictions.sum(axis=1, keepdims=True)
ce_np = (-labels * np.log(predictions)).sum(axis=1)
print(ce_np) # [ 1.24901222 1.86984602]
And if you have exclusive labels (exactly one class is true at a time), it is better to pass plain class indices to tf.nn.sparse_softmax_cross_entropy_with_logits rather than feeding an explicit probability representation like [1.0, 0.0, ...] to tf.nn.softmax_cross_entropy_with_logits. The label representation is more compact that way.
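For comparison, a small sketch of the sparse variant, reusing the logits and session from the example above (the class indices here are my own choice, one index per example instead of a full probability row):
sparse_labels = np.array([2, 1])  # one class index per example
sparse_ce = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits).eval(session=sess)
print(sparse_ce)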

Using rejection_resample() with the Dataset Api

I am having a hard time trying to produce balanced batches using rejection_resample() along with the Dataset API. I am using images and labels (ints) as input, as you can see in the code, but rejection_resample() does not seem to work as expected.
Note: I am using TensorFlow v1.3.
Here I define the dataset, the dataset's distribution and the distribution I want.
target_dist = [0.1, 0.0, 0.0, 0.0, 0.9]
initial_dist = [0.1061, 0.3213, 0.4238, 0.1203, 0.0282]

training_filenames = training_records
training_dataset = tf.contrib.data.TFRecordDataset(training_filenames)
training_dataset = training_dataset.map(tf_record_parser)  # Parse the record into tensors.
training_dataset = training_dataset.repeat()  # number of epochs
training_dataset = training_dataset.shuffle(buffer_size=1000)

training_dataset = tf.contrib.data.rejection_resample(training_dataset,
                                                      class_func=lambda _, c: c,
                                                      target_dist=target_dist,
                                                      initial_dist=initial_dist)

# Return to the same Dataset shape as was the original input
training_dataset = training_dataset.map(lambda _, data: (data))

training_dataset = training_dataset.batch(64)

handle = tf.placeholder(tf.string, shape=[])
iterator = tf.contrib.data.Iterator.from_string_handle(
    handle, training_dataset.output_types, training_dataset.output_shapes)

batch_images, batch_labels = iterator.get_next()
training_iterator = training_dataset.make_initializable_iterator()
When I run this thing I should only get samples from the classes 0 and 4, but I get results from all of the classes, as though it did not work.
with tf.Session() as sess:
    training_handle = sess.run(training_iterator.string_handle())
    sess.run(training_iterator.initializer)
    batch_faces_np, batch_label_np = sess.run([batch_images, batch_labels], feed_dict={handle: training_handle})
    ctr = Counter(batch_label_np)

Counter({2: 31, 3: 22, 4: 6, 1: 5})
I tested with an example based on this post: Dataset API, Iterators and tf.contrib.data.rejection_resample, and with the original testing code from the TensorFlow repo, and it works.
initial_known = True
classes = np.random.randint(5, size=(20000,))  # Uniformly sampled
target_dist = [0.5, 0.0, 0.0, 0.0, 0.4]
initial_dist = [0.2] * 5 if initial_known else None

iterator = dataset_ops.Iterator.from_dataset(
    dataset_ops.rejection_resample(
        (dataset_ops.Dataset.from_tensor_slices(classes)
         .shuffle(200, seed=21)
         .map(lambda c: (c, string_ops.as_string(c)))),
        target_dist=target_dist,
        initial_dist=initial_dist,
        class_func=lambda c, _: c,
        seed=27))

init_op = iterator.initializer
get_next = iterator.get_next()
variable_init_op = variables.global_variables_initializer()

with tf.Session() as sess:
    sess.run(variable_init_op)
    sess.run(init_op)

    returned = []
    while True:
        returned.append(sess.run(get_next))

Counter({(0, (0, b'0')): 3873, (4, (4, b'4')): 3286})
Can you guys help me with that? Thanks.
Try passing a seed value to shuffle(); it worked for me once I set a seed (see the sketch below).
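A minimal sketch of that change on the question's pipeline (the seed value itself is arbitrary); everything downstream, including rejection_resample() and batching, stays the same:
training_dataset = training_dataset.shuffle(buffer_size=1000, seed=42)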

SparseTensor * Vector

How can the following be achieved in tensorflow, when A is a tf.SparseTensor and b is a tf.Variable?
A = np.arange(5**2).reshape((5,5))
b = np.array([1.0, 2.0, 0.0, 0.0, 1.0])
C = A * b
If I try the same notation, I get InvalidArgumentError: Provided indices are out-of-bounds w.r.t. dense side with broadcasted shape.
* works for a SparseTensor as well; your problem seems to be related to the SparseTensor itself. You might have provided indices that are out of the range of the shape you gave it. Consider this example:
A_t = tf.SparseTensor(indices=[[0,6],[4,4]], values=[3.2,5.1], dense_shape=(5,5))
Notice that the column index 6 exceeds the specified shape, which allows at most 5 columns (indices 0 through 4), and this gives the same error you've shown:
b = np.array([1.0, 2.0, 0.0, 0.0, 1.0])
B_t = tf.Variable(b, dtype=tf.float32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(A_t * B_t))
InvalidArgumentError (see above for traceback): Provided indices are
out-of-bounds w.r.t. dense side with broadcasted shape
Here is a working example:
A_t = tf.SparseTensor(indices=[[0,3],[4,4]], values=[3.2,5.1], dense_shape=(5,5))
b = np.array([1.0, 2.0, 0.0, 0.0, 1.0])
B_t = tf.Variable(b, dtype=tf.float32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(A_t * B_t))
# SparseTensorValue(indices=array([[0, 3],
# [4, 4]], dtype=int64), values=array([ 0. , 5.0999999], dtype=float32), dense_shape=array([5, 5], dtype=int64))
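To get exactly the question's C = A * b, one rough sketch (my own conversion code, not from the answer) is to build the SparseTensor from the dense A first; the element-wise product then broadcasts b across each row of A, as the working example above already demonstrates:
import numpy as np
import tensorflow as tf

A = np.arange(5 ** 2).reshape((5, 5)).astype(np.float32)
b = np.array([1.0, 2.0, 0.0, 0.0, 1.0], dtype=np.float32)

idx = np.stack(np.nonzero(A), axis=1).astype(np.int64)  # indices of the non-zero entries
A_sp = tf.SparseTensor(indices=idx, values=A[A != 0], dense_shape=A.shape)
b_t = tf.constant(b)

with tf.Session() as sess:
    # densify only to inspect the result; the product itself stays sparse
    print(sess.run(tf.sparse_tensor_to_dense(A_sp * b_t)))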

Give me a code example usage for tensorflow's tf.metrics.sparse_average_precision_at_k

Can you give me a code example that uses tf.metrics.sparse_average_precision_at_k? I cannot find anything on the Internet... :(
If I have a multi-labeled dataset like this one (each example may have more than one target label):
(total number of classes = 5)
y1 = [class_0, class_1]
y2 = [class_1, class_2]
y3 = [class_0]
and my system predicts:
prediction for y1 -> [0.1, 0.3, 0.2, 0.0, 0.0]
prediction for y2 -> [0.0, 0.3, 0.7, 0.4, 0.4]
prediction for y3 -> [0.1, 0.3, 0.2, 0.3, 0.5]
How can I compute for k=3, for example?
P.S.: Feel free to suggest your own example, if you can't comprehend this one.
EDIT: My code so far:
I really don't get it... Please advise for a single prediction (y1 only) as well as for several predictions at once (with a different number of true classes in each).
import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
tf.local_variables_initializer().run()
y1 = tf.constant( np.array([0, 1]) )
y2 = tf.constant( np.array([1, 2]) )
y3 = tf.constant( np.array([0]) )
p1 = tf.constant( np.array([0.1, 0.3, 0.2, 0.0, 0.0]) )
p2 = tf.constant( np.array([0.0, 0.3, 0.7, 0.4, 0.4]) )
p3 = tf.constant( np.array([0.1, 0.3, 0.2, 0.3, 0.5]) )
metric, update = tf.metrics.sparse_average_precision_at_k(tf.cast(y1, tf.int64), p1, 3)
print(sess.run(update))
tf.metrics.sparse_average_precision_at_k will be replaced by tf.metrics.average_precision_at_k. By browsing the TensorFlow source, you will find that when your inputs are y_true and y_pred, this function actually transforms y_pred into y_pred_idx using the top_k function.
y_true is a tensor of shape (batch_size, num_labels), and y_pred is of shape (batch_size, num_classes).
You can also see some discussion in this issue, and this example comes from this issue.
import tensorflow as tf
import numpy as np

y_true = np.array([[2], [1], [0], [3], [0]]).astype(np.int64)
y_true = tf.identity(y_true)

y_pred = np.array([[0.1, 0.2, 0.6, 0.1],
                   [0.8, 0.05, 0.1, 0.05],
                   [0.3, 0.4, 0.1, 0.2],
                   [0.6, 0.25, 0.1, 0.05],
                   [0.1, 0.2, 0.6, 0.1]]).astype(np.float32)
y_pred = tf.identity(y_pred)

_, m_ap = tf.metrics.sparse_average_precision_at_k(y_true, y_pred, 3)

sess = tf.Session()
sess.run(tf.local_variables_initializer())

stream_vars = [i for i in tf.local_variables()]

tf_map = sess.run(m_ap)
print(tf_map)
print(sess.run(stream_vars))

tmp_rank = tf.nn.top_k(y_pred, 3)
print(sess.run(tmp_rank))
The line stream_vars = [i for i in tf.local_variables()] lets you see the two local variables created inside tf.metrics.sparse_average_precision_at_k.
The line tmp_rank = tf.nn.top_k(y_pred, 3) helps you understand, by changing the value of k, which prediction indices tf.metrics.sparse_average_precision_at_k actually uses.
You can change the value of k to see a different result; tmp_rank holds the indices used in calculating the average precision.
For example:
When k=1, only the first example matches its label, so the average precision at 1 is 1/5 = 0.2.
When k=2, the third example also matches its label (at rank 2), so the average precision at 2 is (1 + 1/2)/5 = 0.3.
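As a rough cross-check of those numbers, here is a plain-NumPy sketch of average precision at k for single-label rows (ap_at_k is a hypothetical helper, not part of the tf API; ties may be broken differently than in tf.nn.top_k):
import numpy as np

def ap_at_k(label, scores, k):
    ranked = np.argsort(scores)[::-1][:k]  # top-k predicted class indices
    hits = np.where(ranked == label)[0]
    return 1.0 / (hits[0] + 1) if hits.size else 0.0

labels = [2, 1, 0, 3, 0]
preds = [[0.1, 0.2, 0.6, 0.1],
         [0.8, 0.05, 0.1, 0.05],
         [0.3, 0.4, 0.1, 0.2],
         [0.6, 0.25, 0.1, 0.05],
         [0.1, 0.2, 0.6, 0.1]]

for k in (1, 2):
    print(k, np.mean([ap_at_k(l, np.array(p), k) for l, p in zip(labels, preds)]))
# 1 0.2
# 2 0.3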
metric, update = tf.metrics.sparse_average_precision_at_k(tf.stack([y1, y2, y3]), tf.stack([p1, p2, p3]), 3)
print(sess.run(update))

Return all possible prediction values

This neural network trains on inputs [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]] with labelled outputs: [[0.0], [1.0], [1.0], [0.0]]
import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

# a batch of inputs of 2 values each
inputs = tf.placeholder(tf.float32, shape=[None, 2])
# a batch of outputs of 1 value each
desired_outputs = tf.placeholder(tf.float32, shape=[None, 1])

# [!] define the number of hidden units in the first layer
HIDDEN_UNITS = 4

weights_1 = tf.Variable(tf.truncated_normal([2, HIDDEN_UNITS]))
biases_1 = tf.Variable(tf.zeros([HIDDEN_UNITS]))

# connect 2 inputs to every hidden unit. Add bias
layer_1_outputs = tf.nn.sigmoid(tf.matmul(inputs, weights_1) + biases_1)
print(layer_1_outputs)

NUMBER_OUTPUT_NEURONS = 1

biases_2 = tf.Variable(tf.zeros([NUMBER_OUTPUT_NEURONS]))
weights_2 = tf.Variable(tf.truncated_normal([HIDDEN_UNITS, NUMBER_OUTPUT_NEURONS]))

finalLayerOutputs = tf.nn.sigmoid(tf.matmul(layer_1_outputs, weights_2) + biases_2)

tf.global_variables_initializer().run()

logits = tf.nn.sigmoid(tf.matmul(layer_1_outputs, weights_2) + biases_2)

training_inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
training_outputs = [[0.0], [1.0], [1.0], [0.0]]

error_function = 0.5 * tf.reduce_sum(tf.sub(logits, desired_outputs) * tf.sub(logits, desired_outputs))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)

for i in range(15):
    _, loss = sess.run([train_step, error_function],
                       feed_dict={inputs: np.array(training_inputs),
                                  desired_outputs: np.array(training_outputs)})

print(sess.run(logits, feed_dict={inputs: np.array([[0.0, 1.0]])}))
Upon training, this network returns [[ 0.61094815]] for the input [[0.0, 1.0]].
Is [[ 0.61094815]] the value with the highest probability that the trained network assigns to the input [[0.0, 1.0]]? Can the lower-probability values also be accessed, and not just the most probable one?
If I increase the number of training epochs I'll get a better prediction, but in this case I just want to access all potential values with their probabilities for a given input.
Update:
I have updated the code to use multi-class classification with softmax, but the prediction for [[0.0, 1.0, 0.0, 0.0]] is [array([0])]. Have I updated it correctly?
import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

# a batch of inputs of 4 values each
inputs = tf.placeholder(tf.float32, shape=[None, 4])
# a batch of outputs of 3 values each
desired_outputs = tf.placeholder(tf.float32, shape=[None, 3])

# [!] define the number of hidden units in the first layer
HIDDEN_UNITS = 4

weights_1 = tf.Variable(tf.truncated_normal([4, HIDDEN_UNITS]))
biases_1 = tf.Variable(tf.zeros([HIDDEN_UNITS]))

# connect 4 inputs to every hidden unit. Add bias
layer_1_outputs = tf.nn.softmax(tf.matmul(inputs, weights_1) + biases_1)

biases_2 = tf.Variable(tf.zeros([3]))
weights_2 = tf.Variable(tf.truncated_normal([HIDDEN_UNITS, 3]))

finalLayerOutputs = tf.nn.softmax(tf.matmul(layer_1_outputs, weights_2) + biases_2)

tf.global_variables_initializer().run()

logits = tf.nn.softmax(tf.matmul(layer_1_outputs, weights_2) + biases_2)

training_inputs = [[0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0]]
training_outputs = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]

error_function = 0.5 * tf.reduce_sum(tf.sub(logits, desired_outputs) * tf.sub(logits, desired_outputs))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)

for i in range(15):
    _, loss = sess.run([train_step, error_function],
                       feed_dict={inputs: np.array(training_inputs),
                                  desired_outputs: np.array(training_outputs)})

prediction = tf.argmax(logits, 1)
best = sess.run([prediction], feed_dict={inputs: np.array([[0.0, 1.0, 0.0, 0.0]])})
print(best)
Update 2:
Replacing
prediction = tf.argmax(logits, 1)
best = sess.run([prediction], feed_dict={inputs: np.array([[0.0, 1.0, 0.0, 0.0]])})
print(best)
with:
prediction = tf.nn.softmax(logits)
best = sess.run([prediction], feed_dict={inputs: np.array([[0.0, 1.0, 0.0, 0.0]])})
print(best)
appears to fix the issue.
So now the full source is:
import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

# a batch of inputs of 4 values each
inputs = tf.placeholder(tf.float32, shape=[None, 4])
# a batch of outputs of 3 values each
desired_outputs = tf.placeholder(tf.float32, shape=[None, 3])

# [!] define the number of hidden units in the first layer
HIDDEN_UNITS = 4

weights_1 = tf.Variable(tf.truncated_normal([4, HIDDEN_UNITS]))
biases_1 = tf.Variable(tf.zeros([HIDDEN_UNITS]))

# connect 4 inputs to every hidden unit. Add bias
layer_1_outputs = tf.nn.softmax(tf.matmul(inputs, weights_1) + biases_1)

biases_2 = tf.Variable(tf.zeros([3]))
weights_2 = tf.Variable(tf.truncated_normal([HIDDEN_UNITS, 3]))

finalLayerOutputs = tf.nn.softmax(tf.matmul(layer_1_outputs, weights_2) + biases_2)

tf.global_variables_initializer().run()

logits = tf.nn.softmax(tf.matmul(layer_1_outputs, weights_2) + biases_2)

training_inputs = [[0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0]]
training_outputs = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]

error_function = 0.5 * tf.reduce_sum(tf.sub(logits, desired_outputs) * tf.sub(logits, desired_outputs))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)

for i in range(1500):
    _, loss = sess.run([train_step, error_function],
                       feed_dict={inputs: np.array(training_inputs),
                                  desired_outputs: np.array(training_outputs)})

prediction = tf.nn.softmax(logits)
best = sess.run([prediction], feed_dict={inputs: np.array([[0.0, 1.0, 0.0, 0.0]])})
print(best)
Which prints
[array([[ 0.49810624, 0.24845563, 0.25343812]], dtype=float32)]
Your current network does (logistic) regression, not really classification: given an input x, it tries to evaluate f(x) (where f(x) = x1 XOR x2 here, but the network does not know that before training), which is regression. To do so, it learns a function f1(x) and tries to make it as close to f(x) as possible on all your training samples. [[ 0.61094815]] is just the value of f1([[0.0, 1.0]]). In this setting, there is no such thing as a "probability of being in a class", since there is no class. There is only the user (you) choosing to interpret f1(x) as the probability of the output being 1. Since you have only 2 classes, that tells you that the probability of the other class is 1 - 0.61094815 (that is, you're doing classification with the output of the network, but it is not really trained to do that in itself). This method, used for classification, is in a way a (widely used) trick, but it only works if you have 2 classes.
A real network for classification would be built a bit differently: your logits would be of shape (batch_size, number_of_classes) - so (1, 2) in your case - you apply a softmax on them, and then the prediction is argmax(softmax), with probability max(softmax). Then you can also get the probability of each output, according to the network: probability(class i) = softmax[i]. Here the network is really trained to learn the probability of x being in each class.
I'm sorry if my explanation is obscure, or if the difference between regression between 0 and 1 and classification seems philosophical in a setting with 2 classes, but if you add more classes you'll probably see what I mean.
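As a minimal sketch of what that classification setup looks like (this is my own illustration using tf.layers, not the asker's code; the hyperparameters are arbitrary):
import numpy as np
import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=[None, 2])
labels = tf.placeholder(tf.int64, shape=[None])  # one class index per sample

hidden = tf.layers.dense(inputs, 4, activation=tf.nn.sigmoid)
logits = tf.layers.dense(hidden, 2)              # shape (batch_size, 2), no activation

probabilities = tf.nn.softmax(logits)            # probability of each class
prediction = tf.argmax(probabilities, axis=1)    # most likely class

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]], dtype=np.float32)
y = np.array([0, 1, 1, 0], dtype=np.int64)       # XOR targets as class indices

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(2000):
        sess.run(train_step, feed_dict={inputs: x, labels: y})
    # all class probabilities for one input, not just the argmax
    print(sess.run([probabilities, prediction], feed_dict={inputs: [[0., 1.]]}))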
EDIT
Answer to your 2 updates.
In your training samples, the labels (training_outputs) must be probability distributions, i.e. they must sum to 1 for each sample (99% of the time they are of the form (1, 0, 0), (0, 1, 0) or (0, 0, 1)), so your first output [0.0, 0.0, 0.0] is not valid. If you want to learn XOR on the first two inputs, then the first output should be the same as the last: [0.0, 0.0, 1.0].
prediction=tf.argmax(logits,1) giving [array([0])] is completely normal: logits contains your probabilities, and prediction is the predicted class, i.e. the class with the biggest probability, which in your case is class 0: in your training set, [0.0, 1.0, 0.0, 0.0] is associated with output [1.0, 0.0, 0.0], i.e. it is of class 0 with probability 1, and of the other classes with probability 0. After enough training, print(best) with prediction=tf.argmax(logits,1) on input [1.0, 1.0, 0.0, 0.0] should give you [array([2])], 2 being the index of the class for this input in your training set.