Grid search and cross-validation

X and y are the observations and the target, respectively.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

logreg = LogisticRegression(random_state=0)
parameters_logreg = {'C': [0.1, 0.5, 1], 'max_iter': [100, 102]}
gs_logreg = GridSearchCV(logreg, parameters_logreg, cv=5)
gs_logreg.fit(X, y)

cv_logreg = KFold(n_splits=5, shuffle=True, random_state=9)
cross_val_score(gs_logreg, X, y, cv=cv_logreg, scoring='roc_auc')
I am working on a classification problem using logistic regression. I apply grid search to find the best hyperparameters, and after that I calculate scores on cross-validation folds.
My first question: does gs_logreg.fit(X, y) affect the final cross-validation scores? How does cross_val_score work? Does it fit gs_logreg once more, but now on the CV folds? Can it remember y from the earlier gs_logreg.fit(X, y)?
My second question: is this code correct? I surprisingly got high scores for my simple model.

To answer your first question, why not comment out that statement (gs_logreg.fit(X, y)), rerun, and check whether the cross-validation results change? If they change, that statement affects the final scores; otherwise it does not. In fact, cross_val_score clones the estimator before fitting it on each fold, so any state from an earlier fit is discarded.
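A minimal sketch of what cross_val_score does internally for each split (assuming X and y are NumPy arrays; train and test stand in for the index arrays produced by cv_logreg):

from sklearn.base import clone
from sklearn.metrics import roc_auc_score

est = clone(gs_logreg)       # fresh, unfitted copy: the earlier fit(X, y) has no effect here
est.fit(X[train], y[train])  # GridSearchCV reruns its own inner 5-fold search on the training part
score = roc_auc_score(y[test], est.predict_proba(X[test])[:, 1])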

How to: TensorFlow-Probability custom loss that ignores NA values (or otherwise masks loss)

I am trying to implement a masked loss function in TensorFlow-Probability that can ignore NAs in the labels.
This is a well-worn task for regular tensors, but I cannot find an example for distributions.
My distributions are sized (batch, time steps, outputs): (512, 251 days, 1 to 8 time series).
The loss function typically given in examples uses the distribution's log probability:
neg_log_likelihood <- function(x, rv_x) {
  -1 * (rv_x %>% tfd_log_prob(x))
}
When I replace NAs with zeros, the model trains fine and converges. When I leave the NAs in, it produces NaN losses, as expected.
I've experimented with many different permutations of tf$where to replace the loss with 0, the label with 0, etc. In each case the model stops training and the loss stays near some constant, even when there is just a single NA in the labels.
neg_log_likelihood_missing <- function(x, rv_x) {
  loss <- -1 * (rv_x %>% tfd_log_prob(x))
  # Zero out loss entries where the label is not finite (NA/NaN)
  loss_nonan <- tf$where(tf$math$is_finite(x), loss, 0)
  return(loss_nonan)
}
My use of R here is incidental; I can translate any examples in Python or otherwise. If there is a correct way to do this so that losses back-propagate correctly, I would greatly appreciate it.
If you are using gradient-based inference, you may need the "double where" trick.
While this gets you a correct value of y:
y = computation(x)
y = tf.where(tf.math.is_nan(y), 0., y)
...the derivative of the tf.where can still contain a NaN.
Instead write:
safe_x = tf.where(is_unsafe(x), some_safe_x, x)
y = computation(safe_x)
y = tf.where(is_unsafe(x), 0., y)
...to get both a safe y out and a safe dy/dx.
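As a concrete illustration (a minimal TF2 sketch, using tf.sqrt and treating negative inputs as the "unsafe" values):

import tensorflow as tf

x = tf.constant([4.0, -1.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    safe_x = tf.where(x < 0.0, tf.ones_like(x), x)  # replace unsafe inputs first
    y = tf.sqrt(safe_x)
    y = tf.where(x < 0.0, tf.zeros_like(y), y)      # then mask the output
print(tape.gradient(y, x))  # [0.25, 0.] -- finite everywhere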
For the case you're considering, perhaps write:
class MyMaskedDist(tfd.Distribution):
  ...
  def _log_prob(self, x):
    # Evaluate the log-prob at a safe stand-in value, then zero out the masked entries
    safe_x = tf.where(tf.math.is_nan(x), self.mode(), x)
    lp = compute_log_prob(safe_x)
    lp = tf.where(tf.math.is_nan(x), tf.zeros([], lp.dtype), lp)
    return lp

Calculate prediction derivative in a custom loss function

In addition to the MSE of y_true and y_predicted, I would like to use the second derivative of y_predicted in the cost function, because my model is currently very dynamic. Suppose y_predicted has shape (256, 100, 1). The first dimension corresponds to the samples (delta_t between consecutive time steps is 0.1 s). Now I would like to differentiate along the time dimension, i.e.
diff(diff(y_predicted[1, :, 1])) / delta_t**2
for each row (dim 0) of y_predicted.
Note, I only want to use y_predicted and delta_t to differentiate.
Thank you very much,
Max
To calculate the second-order derivative you could use tf.hessians (TensorFlow 1.x) as follows:
x = tf.Variable([7])
x2 = x * x
d2x2 = tf.hessians(x2, x)
Evaluating d2x2 yields:
[array([[2]], dtype=int32)]
In your case, you could do
loss += lam_l1 * tf.hessians(y_pred, xs)
where xs are the tensors with respect to which you would like to differentiate.
If you wish to use Keras directly, you can chain keras.backend.gradients(loss, variables) twice; there is no Keras equivalent of tf.hessians.
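Note that what the question actually asks for, a finite difference along the time axis of y_predicted, does not need automatic differentiation at all. A minimal sketch (assuming a Keras-style loss signature; lam is a hypothetical weighting factor, not from the original post):

import tensorflow as tf

def second_derivative_penalty(y_pred, delta_t=0.1, lam=1e-3):
    # First-order differences along the time axis (axis 1)
    d1 = y_pred[:, 1:, :] - y_pred[:, :-1, :]
    # Second-order differences, scaled by delta_t**2
    d2 = (d1[:, 1:, :] - d1[:, :-1, :]) / delta_t**2
    return lam * tf.reduce_mean(tf.square(d2))

def loss_fn(y_true, y_pred):
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    return mse + second_derivative_penalty(y_pred)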

How to deal with Imbalanced Dataset for Multi Label Classification

I was wondering how to penalize less-represented classes more than other classes when dealing with a really imbalanced dataset (10 classes over about 20000 samples; here are the numbers of occurrences per class: [10868 26 4797 26 8320 26 5278 9412 4485 16172]).
I read about the TensorFlow function weighted_cross_entropy_with_logits (https://www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits), but I am not sure I can use it for a multi-label problem.
I found a post that sums up perfectly the problem I have (Neural Network for Imbalanced Multi-Class Multi-Label Classification) and proposes an idea, but it has no answers, and I thought the idea might be good :)
Thank you for your ideas and answers!
First of all, my suggestion is to modify your cost function to work in a multi-label way. There is code that shows how to use softmax cross-entropy in TensorFlow for a multi-label image task.
With that code, you can multiply in weights at each row of the loss calculation. Here is example code for the case where you have a multi-label task (i.e., each image can have two labels):
logits_split = tf.split(logits, num_or_size_splits=2, axis=1)
labels_split = tf.split(labels, num_or_size_splits=2, axis=1)
weights_split = tf.split(weights, num_or_size_splits=2, axis=1)
total = 0.0
for i in range(len(logits_split)):
    # Per-example cross-entropy for this label slot, weighted before averaging
    per_example = tf.nn.softmax_cross_entropy_with_logits(
        logits=logits_split[i], labels=labels_split[i])
    total += tf.reduce_mean(per_example * tf.reshape(weights_split[i], [-1]))
I think you can just use tf.nn.weighted_cross_entropy_with_logits for multiclass classification.
For example, for 4 classes, where the ratios to the class with the largest number of members are [0.8, 0.5, 0.6, 1], you would just give it a weight vector in the following way:
cross_entropy = tf.nn.weighted_cross_entropy_with_logits(
    targets=ground_truth_input, logits=logits,
    pos_weight=tf.constant([0.8, 0.5, 0.6, 1]))
So I am not entirely sure that I understand your problem from what you have written. The post you link to talks about multi-label AND multi-class, but that doesn't really make sense given what is written there either. So I will approach this as a multi-class problem where each sample has a single label.
In order to penalize the under-represented classes, I implemented a weight tensor based on the labels in the current batch. For a 3-class problem, you could e.g. define the weights as the inverse frequency of the classes, such that if the proportions are [0.1, 0.7, 0.2] for classes 1, 2 and 3, respectively, the weights will be [10, 1.43, 5]. Defining a weight tensor based on the current batch is then:
weight_per_class = tf.constant([10.0, 1.43, 5.0])  # shape (num_classes,)
onehot_labels = tf.one_hot(labels, depth=3)        # shape (batch_size, num_classes)
weights = tf.reduce_sum(
    tf.multiply(onehot_labels, weight_per_class), axis=1)  # shape (batch_size,)
reduction = tf.losses.Reduction.MEAN  # this ensures that we get a weighted mean
loss = tf.losses.softmax_cross_entropy(
    onehot_labels=onehot_labels, logits=logits, weights=weights, reduction=reduction)
Using softmax ensures that the classification problem is not 3 independent classifications.
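For the class counts given in the question, the inverse-frequency weights could be derived like this (a minimal NumPy sketch; normalizing so that a perfectly balanced class gets weight 1.0 is one common convention):

import numpy as np

counts = np.array([10868, 26, 4797, 26, 8320, 26, 5278, 9412, 4485, 16172])
freq = counts / counts.sum()
weight_per_class = 1.0 / (freq * len(counts))  # a balanced class gets weight 1.0
print(weight_per_class.round(2))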

Linear Regression overfitting

I'm taking course 2 of this Coursera machine learning specialization (https://www.coursera.org/specializations/machine-learning), which covers linear regression.
I solved the assignment using GraphLab but wanted to try out sklearn for the experience and learning. I'm using sklearn and pandas for this.
The model overfits the data. How can I fix this? This is the code, and these are the coefficients I'm getting:
[ -3.33628603e-13 1.00000000e+00]
poly1_data = polynomial_dataframe(sales["sqft_living"], 1)  # helper from the course
poly1_data["price"] = sales["price"]  # note: "price" is now also a column of poly1_data
model1 = LinearRegression()
model1.fit(poly1_data, sales["price"])  # poly1_data, including the "price" column, is passed as features
print(model1.coef_)
plt.plot(poly1_data['power_1'], poly1_data['price'], '.',
         poly1_data['power_1'], model1.predict(poly1_data), '-')
plt.show()
The plotted line looks like this; as you can see, it connects every data point.
And this is the plot of the input data.
I wouldn't even call this overfitting. I'd say you aren't doing what you think you're doing. In particular, you forgot to add a column of 1's to your design matrix, X. For example:
import numpy as np
import pandas as pd

# generate some univariate data
x = np.arange(100)
y = 2*x + x*np.random.normal(0, 1, 100)
df = pd.DataFrame([x, y]).T
df.columns = ['x', 'y']
You're doing the following:
model1 = LinearRegression()
X = df["x"].values.reshape(1,-1)[0]  # reshaping back to a 1-D array
y = df["y"].values.reshape(1,-1)[0]
model1.fit(X, y)  # note: recent scikit-learn versions require a 2-D feature matrix here
Which leads to:
plt.plot(df['x'].values, df['y'].values,'.')
plt.plot(X[0], model1.predict(X)[0],'-')
plt.show()
Instead, you want to add a column of 1's to your design matrix (X):
X = np.column_stack([np.ones(len(df['x'])), df["x"].values])
y = df["y"].values.reshape(-1, 1)
model1.fit(X, y)
And (after some reshaping) you get:
plt.plot(df['x'].values, df['y'].values,'.')
plt.plot(df['x'].values, model1.predict(X),'-')
plt.show()
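As a side note (beyond what this answer covers): scikit-learn's LinearRegression fits an intercept by default (fit_intercept=True), so the same fit can be obtained without manually stacking a ones column:

X = df[["x"]].values        # 2-D design matrix without a ones column
model = LinearRegression()  # fit_intercept=True is the default
model.fit(X, df["y"].values)
print(model.intercept_, model.coef_)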

Create color histogram of an image using tensorflow

Is there a neat way to compute a color histogram of an image? Maybe by abusing the internal code of tf.histogram_summary? From what I've seen, this code is not very modular and directly calls some C++ code.
Thanks in advance.
I would use tf.unsorted_segment_sum, where the "segment IDs" are computed from the color values and the thing you sum is a tf.ones vector. Note that tf.unsorted_segment_sum is probably better thought of as "bucket sum". It implements dest[segment] += thing_to_sum -- exactly the operation you need for a histogram.
In slightly pseudocode (meaning I haven't run this):
binned_values = tf.reshape(tf.floor(img_r * (NUM_BINS-1)), [-1])  # assumes img_r values in [0, 1]
binned_values = tf.cast(binned_values, tf.int32)
ones = tf.ones_like(binned_values, dtype=tf.int32)
counts = tf.unsorted_segment_sum(ones, binned_values, NUM_BINS)  # counts[b] += 1 for each pixel in bin b
You could accomplish this in one pass instead of separating out the r, g, and b values with a split if you wanted to cleverly construct your "ones" to look like "100100..." for red, "010010" for green, etc., but I suspect it would be slower overall, and harder to read. I'd just do the split that you proposed above.
This is what I'm using right now:
# Assumption: img is a tensor of size [img_width, img_height, 3], normalized to the range [-1, 1].
with tf.variable_scope('color_hist_producer') as scope:
  bin_size = 0.2
  hist_entries = []
  # Split image into single channels
  img_r, img_g, img_b = tf.split(img, 3, axis=2)  # tf.split(2, 3, img) in older TF versions
  for img_chan in [img_r, img_g, img_b]:
    for idx, i in enumerate(np.arange(-1, 1, bin_size)):
      gt = tf.greater(img_chan, i)
      leq = tf.less_equal(img_chan, i + bin_size)
      # Combine with logical_and, cast to float and sum the entries -> count for the current bin
      hist_entries.append(tf.reduce_sum(tf.cast(tf.logical_and(gt, leq), tf.float32)))
  # Stack the scalars into a tensor, then normalize the histogram
  hist = tf.nn.l2_normalize(tf.stack(hist_entries), 0)  # tf.pack in older TF versions
tf.histogram_fixed_width might be what you are looking for. Full documentation at
https://www.tensorflow.org/api_docs/python/tf/histogram_fixed_width
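Building on that last suggestion, a minimal sketch of a per-channel color histogram with tf.histogram_fixed_width (assuming img is an [H, W, 3] float tensor with values in [-1, 1]; the function name is just for illustration):

import tensorflow as tf

def color_histogram(img, nbins=10, value_range=(-1.0, 1.0)):
    channels = tf.unstack(img, axis=-1)  # r, g, b channel tensors
    hists = [tf.histogram_fixed_width(tf.reshape(c, [-1]), value_range, nbins=nbins)
             for c in channels]
    return tf.stack(hists)  # shape (3, nbins), integer counts per bin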