Optimize this loss function (any way to vectorize it?) - numpy

def get_model_score(preds, actuals, sw):
    total_loss = 0
    for i in range(len(preds)):
        for idx, v in enumerate(actuals[i]):
            if v != 0:
                total_loss += sw[i] * abs(preds[i][idx] - actuals[i][idx])
    loss = total_loss / (sum(sw) * len(preds))
    return loss
I have a loss function which is essentially a weighted mean absolute error. However, we can expect every "true" sample to have only one non-zero value, e.g. [0, 0, 1]. We only want to account for the loss between this non-zero value and the corresponding predicted value.
Take the following examples:
True: [0, 0, 1]
Predicted: [0.5, -0.5, 0.5]
The loss for this sample would simply be 0.5. (In the actual function we also have an array of sample-wise weights, "sw".)
That being said, I'm having trouble figuring out whether my function can be vectorized with NumPy.

Looks like this is what did the trick
np.sum(np.abs(actuals[~np.isnan(actuals)] - preds[~np.isnan(actuals)]) * sw) / (sum(sw) * len(preds))
Actually ended up going with nulls instead of zeros so the condition is ~np.isnan(actuals).
But yeah, I think the trick was to use the condition on both the actuals and preds arrays. On the preds array it will grab the correct index based on the condition from the actuals array. Hopefully this helps anyone doing something similar.
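Pulling that together, a minimal sketch of the full vectorized function (assuming preds, actuals and sw are NumPy arrays and that actuals marks the "empty" slots with NaN, as in the snippet above):

import numpy as np

def get_model_score_vectorized(preds, actuals, sw):
    # Positions that actually carry a label (NaN marks "no label").
    mask = ~np.isnan(actuals)
    # Exactly one labelled entry per row, so the masked 1-D arrays line up
    # with the per-sample weights sw.
    abs_err = np.abs(actuals[mask] - preds[mask])
    return np.sum(abs_err * sw) / (np.sum(sw) * len(preds))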

Related

How to: TensorFlow-Probability custom loss that ignores NA values (or otherwise masks loss)

I seek to implement in TensorFlow-Probability a masked loss function that can ignore NAs in the labels.
This is a well-worn task for regular tensors, but I cannot find an example for distributions.
My distributions have shape (batch, time-steps, outputs), i.e. (512, 251 days, 1 to 8 time series).
The traditional loss function given in examples uses the distribution's log probability:
neg_log_likelihood <- function(x, rv_x) {
  -1 * (rv_x %>% tfd_log_prob(x))
}
When I replace NAs with zeros, the model trains fine and converges. When I leave in NAs it produces NaN losses as expected.
I've experimented with many different permutations of tf$where to replace loss with 0, the label with 0, etc. In each of those cases the model stops training and loss stays near some constant. That's the case even when there's just a single NA in the labels.
neg_log_likelihood_missing <- function(x, rv_x) {
  loss = -1 * (rv_x %>% tfd_log_prob(x))
  loss_nonan = tf$where(tf$math$is_finite(x), loss, 0)
  return(loss_nonan)
}
My use of R here is incidental, and I can translate any examples in Python or otherwise. If there's a correct way to do this so that losses correctly back-propagate, I would greatly appreciate it.
If you are using gradient based inference, you may need the "double where" trick.
While this gets you a correct value of y:
y = computation(x)
tf.where(is_nan(y), 0, y)
...the derivative of the tf.where can still have a nan.
Instead write:
safe_x = tf.where(is_unsafe(x), some_safe_x, x)
y = computation(safe_x)
tf.where(is_unsafe(x), 0, y)
...to get both a safe y out and a safe dy/dx.
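As a quick illustration of why the single where is not enough (a TF 2-style eager sketch, using sqrt of a negative number as a stand-in for any NaN-producing computation):

import tensorflow as tf

x = tf.constant([-1.0, 4.0])

# Single where: the forward value is clean, but the gradient is not.
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.sqrt(x)                                   # [nan, 2.]
    y_single = tf.where(tf.math.is_nan(y), tf.zeros_like(y), y)
print(tape.gradient(y_single, x))                    # still contains nan

# Double where: sanitize the input first, then mask the output.
with tf.GradientTape() as tape:
    tape.watch(x)
    safe_x = tf.where(x < 0, tf.ones_like(x), x)
    y_double = tf.where(x < 0, tf.zeros_like(x), tf.sqrt(safe_x))
print(tape.gradient(y_double, x))                    # finite everywhere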
For the case you're considering, perhaps write:
class MyMaskedDist(tfd.Distribution):
  ...

  def _log_prob(self, x):
    safe_x = tf.where(tf.is_nan(x), self.mode(), x)
    lp = compute_log_prob(safe_x)
    lp = tf.where(tf.is_nan(x), tf.zeros([], lp.dtype), lp)
    return lp
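Since the question mentions that Python examples are fine to translate, here is a minimal Python sketch of the same double-where idea applied directly to the loss, assuming an elementwise (univariate) distribution such as tfd.Normal whose log_prob has the same shape as the labels, with NaN marking the missing labels:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

def masked_neg_log_likelihood(y_true, dist):
    is_missing = tf.math.is_nan(y_true)
    # First where: feed a harmless stand-in label so log_prob never sees NaN.
    safe_y = tf.where(is_missing, tf.zeros_like(y_true), y_true)
    nll = -dist.log_prob(safe_y)
    # Second where: zero out the loss at the missing positions.
    return tf.where(is_missing, tf.zeros_like(nll), nll)

# e.g. dist = tfd.Normal(loc=predicted_means, scale=1.0), where predicted_means
# is a hypothetical tensor of model outputs with the same shape as y_true.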

Calculate weighted statistical moments in Python

I've been looking for a function or package that would allow me to calculate the skew and kurtosis of a distribution in a weighted way, as I have histogram data.
For instance I have the data
import numpy as np

np.array([[1, 2],
          [2, 5],
          [3, 6],
          [4, 12],
          [5, 1]])
where the first column [1, 2, 3, 4, 5] contains the values and the second column [2, 5, 6, 12, 1] contains the frequencies of the values.
I have found out how to do the first two moments (mean, standard deviation) in a weighted way using the weighted_avg_and_std function specified in this thread, but I was not quite sure how I could extend this to both the skew and kurtosis, or even the nth statistical moment.
I have found the definitions themselves here and could manually write functions to implement this from scratch, but before I go and do that I was wondering if there were any existing packages or functions that might be able to do this.
Thanks
EDIT:
I figured it out; the following code works (please note that this is for population moments):
skewness = np.average(((values - average) / np.sqrt(variance))**3, weights=weights)
and
kurtosis = np.average(((values - average) / np.sqrt(variance))**4 - 3, weights=weights)
I think you have already listed all the ingredients that you need, following the formulas in the link you provided:
import numpy as np

a = np.array([[1, 2], [2, 5], [3, 6], [4, 12], [5, 1]])
values, weights = a.T

def n_weighted_moment(values, weights, n):
    assert n > 0 and values.shape == weights.shape
    w_avg = np.average(values, weights=weights)
    w_var = np.sum(weights * (values - w_avg)**2) / np.sum(weights)
    if n == 1:
        return w_avg
    elif n == 2:
        return w_var
    else:
        w_std = np.sqrt(w_var)
        return np.sum(weights * ((values - w_avg) / w_std)**n) / np.sum(weights)
        # Same as np.average(((values - w_avg)/w_std)**n, weights=weights)
Which results in:
for n in range(1, 5):
    print(f'Moment {n} value is {n_weighted_moment(values, weights, n)}')
Moment 1 value is 3.1923076923076925
Moment 2 value is 1.0784023668639053
Moment 3 value is -0.5962505715592139
Moment 4 value is 2.384432138280637
Notice that while your kurtosis formula subtracts 3 to give the excess kurtosis, the generic n-th moment implemented here does not apply that correction.
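As a quick sanity check (this assumes the frequencies are whole numbers), you can expand the histogram back into raw observations with np.repeat and compare against scipy.stats, which should reproduce the population moments printed above:

import numpy as np
from scipy import stats

values = np.array([1, 2, 3, 4, 5])
weights = np.array([2, 5, 6, 12, 1])

sample = np.repeat(values, weights)          # histogram -> raw observations
print(stats.skew(sample))                    # population skewness, ~ -0.596
print(stats.kurtosis(sample, fisher=False))  # plain (non-excess) kurtosis, ~ 2.384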
Taken from here
Here is the code
def weighted_mean(var, wts):
    """Calculates the weighted mean"""
    return np.average(var, weights=wts)

def weighted_variance(var, wts):
    """Calculates the weighted variance"""
    return np.average((var - weighted_mean(var, wts))**2, weights=wts)

def weighted_skew(var, wts):
    """Calculates the weighted skewness"""
    return (np.average((var - weighted_mean(var, wts))**3, weights=wts) /
            weighted_variance(var, wts)**(1.5))

def weighted_kurtosis(var, wts):
    """Calculates the weighted kurtosis"""
    return (np.average((var - weighted_mean(var, wts))**4, weights=wts) /
            weighted_variance(var, wts)**(2))
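For example, with the histogram data from the question these helpers give the same third and fourth standardized moments as above (note that weighted_kurtosis as written returns the plain kurtosis, not the excess kurtosis):

values = np.array([1, 2, 3, 4, 5])
weights = np.array([2, 5, 6, 12, 1])
print(weighted_skew(values, weights))      # ~ -0.596
print(weighted_kurtosis(values, weights))  # ~ 2.384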

tf.argmax is returning a random high value, outside the valid dimension range

I have the following piece of code, where I have a tensor of dimensions (150,240,240).
Now from these 150 slices, I want to construct one slice (of size 240 by 240) by comparing all 150 slices across each of the values in the 240 by 240 matrix. I use tf.argmax for that. It usually works, but in some cases one of the values in the result is really huge and random, like 4294967390. How is that possible? It should have returned a value between 0 and 149 for every position. Following is my code for doing that.
Note: in the code below, the variable result has shape (20, 150, 240, 240).
for i in range(0, 20):
    denominator = tf.reduce_logsumexp(result[i, :, :, :], axis=0)
    if i == 0:
        stackofArrs = tf.argmax(tf.exp(result[i, :, :, :] - denominator), axis=0)
    else:
        stackofArrs = tf.concat([stackofArrs, tf.argmax(tf.exp(result[i, :, :, :] - denominator), axis=0)], axis=0)
I wondered whether the logsumexp operation was causing an overflow, but even in that case argmax shouldn't return a crazy value like this, right?
tf.argmax will output out-of-range values if it is applied to a tensor containing NaN or Inf values. You should make sure those are not present before applying tf.argmax.
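As a concrete illustration, a TF 1.x-style sketch (matching the question's code; is_finite and verify_tensor_all_finite are the pre-2.x names) that either fails fast or sanitizes the tensor before the argmax:

probs = tf.exp(result[i, :, :, :] - denominator)
# Option 1: raise an error at run time if anything non-finite slipped in.
probs = tf.verify_tensor_all_finite(probs, "probs contains NaN/Inf")
# Option 2: replace non-finite entries so argmax only ever sees valid numbers.
safe_probs = tf.where(tf.is_finite(probs), probs, tf.zeros_like(probs))
stackofArrs = tf.argmax(safe_probs, axis=0)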

How does tf.nn.moments calculate variance?

Look at the test example:
import tensorflow as tf
x = tf.constant([[1,2],[3,4],[5,6]])
mean, variance = tf.nn.moments(x, [0])
with tf.Session() as sess:
    m, v = sess.run([mean, variance])
    print(m, v)
The output is:
[3 4]
[2 2]
We want to calculate the variance along axis 0. The first column is [1, 3, 5], so the mean = (1+3+5)/3 = 3, which is right, and the variance should be [(1-3)^2 + (3-3)^2 + (5-3)^2]/3 = 2.6666, but the output is 2. Can anyone tell me how tf.nn.moments calculates the variance?
By the way, looking at the API doc, what does shift do?
The problem is that x is an integer tensor and TensorFlow, instead of forcing a conversion, performs the computation as well as it can without changing the type (so the outputs are also integers). You can pass float numbers in the construction of x or specify the dtype parameter of tf.constant:
x = tf.constant([[1,2],[3,4],[5,6]], dtype=tf.float32)
Then you get the expected result:
import tensorflow as tf
x = tf.constant([[1,2],[3,4],[5,6]], dtype=tf.float32)
mean, variance = tf.nn.moments(x, [0])
with tf.Session() as sess:
    m, v = sess.run([mean, variance])
    print(m, v)
>>> [ 3. 4.] [ 2.66666675 2.66666675]
About the shift parameter, it seems to allow you to specify a value to, well, "shift" the input. By shift they mean subtract, so if your input is [1., 2., 4.] and you give a shift of, say, 2.5, TensorFlow would first subtract that amount and compute the moments from [-1.5, 0.5, 1.5]. In general, it seems safe to just leave it as None, which will perform a shift by the mean of the input, but I suppose there may be cases where giving a predetermined shift value (e.g. if you know or have an approximate idea of the mean of the input) may yield better numerical stability.
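A small NumPy illustration of that shift identity (this is just the standard shifted-moment formula, not TensorFlow's actual implementation):

import numpy as np

x = np.array([1., 2., 4.])
c = 2.5                                   # the "shift"
d = x - c
mean = c + d.mean()                       # recovers the mean of x
var = (d ** 2).mean() - d.mean() ** 2     # recovers the variance of x
print(mean, x.mean(), var, x.var())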
# Replace the following line with the correct data dtype.
x = tf.constant([[1,2],[3,4],[5,6]])
# If you don't want TensorFlow to truncate the decimals, use a float data type:
x = tf.constant([[1,2],[3,4],[5,6]], dtype=tf.float32)
Results: array([ 2.66666675, 2.66666675], dtype=float32)
Note: judging from the original implementation, shift is not actually used.

Create color histogram of an image using tensorflow

Is there a neat way to compute a color histogram of an image? Maybe by abusing the internal code of tf.histogram_summary? From what I've seen, this code is not very modular and calls some C++ code directly.
Thanks in advance.
I would use tf.unsorted_segment_sum, where the "segment IDs" are computed from the color values and the thing you sum is a tf.ones vector. Note that tf.unsorted_segment_sum is probably better thought of as "bucket sum". It implements dest[segment] += thing_to_sum -- exactly the operation you need for a histogram.
In slightly pseudocode (meaning I haven't run this):
binned_values = tf.reshape(tf.floor(img_r * (NUM_BINS-1)), [-1])
binned_values = tf.cast(binned_values, tf.int32)
ones = tf.ones_like(binned_values, dtype=tf.int32)
counts = tf.unsorted_segment_sum(ones, binned_values, NUM_BINS)
You could accomplish this in one pass instead of separating out the r, g, and b values with a split if you wanted to cleverly construct your "ones" to look like "100100..." for red, "010010" for green, etc., but I suspect it would be slower overall, and harder to read. I'd just do the split that you proposed above.
This is what I'm using right now:
# Assumption: img is a tensor of size [img_width, img_height, 3], normalized to the range [-1, 1].
with tf.variable_scope('color_hist_producer') as scope:
    bin_size = 0.2
    hist_entries = []
    # Split image into single channels
    img_r, img_g, img_b = tf.split(2, 3, img)
    for img_chan in [img_r, img_g, img_b]:
        for idx, i in enumerate(np.arange(-1, 1, bin_size)):
            gt = tf.greater(img_chan, i)
            leq = tf.less_equal(img_chan, i + bin_size)
            # Put together with logical_and, cast to float and sum up entries -> gives count for current bin.
            hist_entries.append(tf.reduce_sum(tf.cast(tf.logical_and(gt, leq), tf.float32)))
    # Pack scalars together to a tensor, then normalize histogram.
    hist = tf.nn.l2_normalize(tf.pack(hist_entries), 0)
tf.histogram_fixed_width might be what you are looking for...
Full documentation on
https://www.tensorflow.org/api_docs/python/tf/histogram_fixed_width
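For reference, a minimal sketch of a per-channel color histogram built on tf.histogram_fixed_width (assuming img is a float tensor of shape [height, width, 3] with values in [-1, 1], as in the snippet above):

import tensorflow as tf

def color_histogram(img, num_bins=10, value_range=(-1.0, 1.0)):
    channels = tf.unstack(img, axis=-1)  # r, g, b planes
    hists = [tf.histogram_fixed_width(c, value_range, nbins=num_bins)
             for c in channels]
    return tf.stack(hists)               # shape [3, num_bins], integer counts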