I am trying to predict the unfinished tail of my distribution function with scipy.interpolate.interp1d, but I cannot get the expected results. I have tried several parameters of the scipy function, but they all failed. The graph below shows my actual goal: I want the finished curve to look like the yellow line. Do you have any advice?
(image: the current distribution curve, with the desired finished tail drawn as a yellow line)
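For reference, a minimal sketch of what I have been trying, with hypothetical x_known / y_known arrays standing in for the observed part of the curve. By default interp1d refuses to evaluate outside the data range, so fill_value="extrapolate" (available since SciPy 0.17) is what extends the curve past the last known point:

import numpy as np
from scipy.interpolate import interp1d

# Hypothetical observed part of the distribution curve.
x_known = np.array([0.0, 1.0, 2.0, 3.0])
y_known = np.array([0.10, 0.40, 0.75, 0.90])

# kind="cubic" gives a smooth fit; fill_value="extrapolate" allows
# evaluation beyond the range of x_known.
f = interp1d(x_known, y_known, kind="cubic", fill_value="extrapolate")

# Predict the missing tail.
x_tail = np.linspace(3.0, 6.0, 50)
y_tail = f(x_tail)

Even so, spline extrapolation can diverge quickly, so a parametric fit (for example scipy.optimize.curve_fit with a sigmoid-shaped function) may behave better for a distribution tail than pure interpolation.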
XGBoost's native API plot_importance displays an unknown feature, "unnamed: 0", at the top of the chart.
Here is the output image: Feature Importance Ranking
I checked all the columns of the original dataframe fed into the DMatrix and confirmed that no unknown feature is left in it. I also removed the key ID column.
So, I confirmed that the original dataset did not include any unspecified feature in its columns.
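For completeness, a minimal sketch of the check I ran, with a hypothetical df loaded from a placeholder train.csv; as far as I understand, "Unnamed: 0" usually comes from a saved index column that pandas reads back in as a regular feature:

import pandas as pd

# Hypothetical load of the training data.
df = pd.read_csv("train.csv")

# Verify whether the phantom column is present before building the DMatrix.
print("Unnamed: 0" in df.columns)
print(df.columns.tolist())

# Reading with index_col=0 (or dropping the column) keeps the saved
# index from becoming a feature.
df = pd.read_csv("train.csv", index_col=0)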
My code of plot_importance is here.
from xgboost import plot_importance
from matplotlib import pyplot

plot_importance(pw_model_1, max_num_features=10)
pyplot.savefig('plot.png')
pyplot.show()
Here pw_model_1 is the selected model after hyperparameter tuning.
I would appreciate it if anyone could advise me on how to resolve this issue.
Thank you
Best regards
Michio
We want to correlate the step count with the trends of Precision and Accuracy. The TensorBoard GUI allows this, but we want to automate the results. The printed results do not include the step count, at least not by default.
Is there a way to tease the step count out of TensorBoard's printed results?
The steps are actually available in the TF events file, but extracting and correlating them requires some coding. I show how to do that here: How to parse the tensorflow events file?
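As a rough illustration, a minimal sketch using the TF 1.x summary iterator; the events file name and the 'Precision' tag are placeholders for whatever your run actually logs:

import tensorflow as tf

events_path = 'events.out.tfevents.example'  # placeholder file name
steps, precisions = [], []
for event in tf.train.summary_iterator(events_path):
    for value in event.summary.value:
        if value.tag == 'Precision':  # placeholder tag name
            steps.append(event.step)
            precisions.append(value.simple_value)

# steps[i] now lines up with precisions[i], ready for correlation.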
I am building an object detector in TensorFlow to detect motorbike riders with and without helmets. I have 1,000 images each for riders with helmets, riders without helmets, and pedestrians (3,000 images in total). My last checkpoint was at 35,267 steps. I tested it on a traffic video, but I see unusually large bounding boxes with wrong results. Can someone please explain the reason for such detections? Do I need to wait for at least 50,000 steps, or do I need to add more data (images from the angle of traffic cameras)?
Model: SSD MobileNet (COCO), custom object detection
Training platform: Google Colab
Please find the images attached: Video Snapshot 1
Video Snapshot 2
Day 2 - 10/30/2018
I tested with images today and got different results, which seem to be correct when I test with a single object per image. Please find the results:
Single Object Image Test 1
Single Object Image Test 2
Tested checkpoint: 52,000 steps
But if I test with images containing multiple objects on a road, the detections are wrong and the bounding boxes are weirdly large. Is this because of the dataset, since I am training with one motorbike rider (with or without a helmet) per image?
Please find the wrong results
Multi Object Image Test
Multi Object Image Test
I also tested with images where the scene contains only motorbikes. In this case, I did not get any results. Please find the images:
No Result Image
No Result Image
The results are very confusing. Is there anything I am missing?
There is no need to wait until 50,000 steps; you should get decent results at 35k or even 10k. I would suggest:
- Going through your dataset again and checking all the bounding boxes (data cleaning)
- Checking your model with the inference code for training/inference differences such as batch normalization
- Adding more data with different features, angles, and color complexities
I would check these points before going further.
I'm trying to get the loss from a test image in Faster R-CNN.
If I run copy.copy(trainer.previous_minibatch_loss_average) right after trainer.train_minibatch(data), then I can get the loss for the trained image (minibatch size = 1).
When I try to do exactly the same after trainer.test_minibatch(data), I get: "This Value object is invalid and can no longer be accessed."
I've been looking around, and it seems that others may have accomplished something similar. Here.
Does anyone know how to get the loss of a test image?
# Collect the loss recorded for the last minibatch.
results = []
results.append(trainer.previous_minibatch_loss_average)
The above should work.
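Alternatively, if the Trainer was constructed with the loss as its evaluation function, a sketch like the following may work, since test_minibatch returns the average evaluation criterion for the minibatch (test_minibatches is a hypothetical iterable of minibatch dictionaries):

# test_minibatch returns the average evaluation criterion (the metric the
# Trainer was constructed with); if that metric is the loss itself, this
# collects a per-image test loss.
results = []
for data in test_minibatches:  # hypothetical iterable of minibatch dicts
    results.append(trainer.test_minibatch(data))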
I'm using TensorBoard (TensorFlow 1.1.0) to show the results of my CNN classifier.
I added some output vectors as tf.summary.histogram in order to show the counts of outputs in each bin, but TensorBoard seems to automatically interpolate and display them as a (somehow) smoothed distribution, and therefore I cannot find the exact counts for the bins.
Could someone tell me how I can avoid the interpolation and show normal histograms with bars?
I'm not sure there is an easy way to do it.
I'm quite unsure about the text below, so correct me if I'm wrong.
From this file https://github.com/tensorflow/tensorboard/blob/master/tensorboard/plugins/histogram/vz_histogram_timeseries/index.html it seems that the histogram arrives in TensorBoard as double values.
The summary op uses either the histogram from https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/ops/histogram_ops.py (1) or https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/core/lib/histogram/histogram.cc (2).
I suppose it uses the 2nd, because here https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/summary/summary.py#L189 it calls a function from a generated file. In my installed package, that generated file contains another function call:
result = _op_def_lib.apply_op("HistogramSummary", tag=tag, values=values,
                              name=name)
I grepped the whole repo, and it seems there is no other Python code that defines anything with "HistogramSummary", so it really does seem to be defined here https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/core/kernels/summary_op.cc, and that code uses the code mentioned above (2).
So it seems to me that the histogram in use is buried deep inside the framework, and I'm not sure it would be easy to rewrite.
On this page there is a support email: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/summary . I suppose it's better to contact that person or open an issue on GitHub.
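If the goal is just to see exact per-bin counts, a workaround outside TensorBoard may be enough. A minimal sketch with a hypothetical output vector:

import numpy as np

outputs = np.random.randn(1000)  # hypothetical classifier output vector

# Compute exact bin counts directly; TensorBoard's histogram panel
# re-bins and smooths what it displays, but these counts stay exact.
counts, bin_edges = np.histogram(outputs, bins=20)
for lo, hi, c in zip(bin_edges[:-1], bin_edges[1:], counts):
    print("[%.3f, %.3f): %d" % (lo, hi, c))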