I am trying to build a graph that shows accuracy and loss curves using Matplotlib, but it is not displaying the curves.
Here's what it shows:
I've trained a model on a Kaggle dataset (this one) to detect hand gestures. During training it reports val_accuracy = 1.00; here is an image, or you can see it using the
link to colab
When I try to test the model with an image from the dataset, it gives the right predictions, but when I try to use a real-world image of the "ok" gesture (you can see it at the end of the Colab project), it just gives wrong outputs. I've tried other images; they also give wrong predictions.
Any help, please?
When you have a real-world image you want to predict on, you must process that image in exactly the same way as you processed the training images (see the sketch after this list). For example:
the image size must be the same
the pixels must be scaled the same way
if you trained on RGB images, the real-world image must be an RGB image
if you trained on grayscale images, the real-world image must be grayscale
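A minimal sketch of that preprocessing, assuming a Keras model and assuming (purely for illustration) that the training images were 224x224 RGB pixels scaled to [0, 1]; the file name and the `model` variable are placeholders:

```python
import numpy as np
from tensorflow.keras.preprocessing import image

# Assumed values -- substitute whatever your training pipeline actually used.
IMG_SIZE = (224, 224)                            # same target size as the training images
img = image.load_img('ok_gesture.jpg',           # hypothetical file name
                     target_size=IMG_SIZE,
                     color_mode='rgb')           # or 'grayscale' if you trained on grayscale
x = image.img_to_array(img)
x = x / 255.0                                    # same pixel scaling as in training
x = np.expand_dims(x, axis=0)                    # add the batch dimension the model expects

pred = model.predict(x)                          # `model` is your trained model
```

Whatever resizing, scaling, and color conversion your training pipeline applied, the prediction path has to apply the same ones.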
I found a Bayesian paper that draws the scatter plot matrix for the parameters.
I'm just wondering: what is the goal of drawing this scatter plot matrix?
What does it mean if I see linear or nonlinear relationships between the samples?
Thanks
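For context, this is roughly how such a matrix is produced; the parameter names and samples here are made up:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical posterior samples for three parameters.
rng = np.random.default_rng(0)
samples = pd.DataFrame(rng.normal(size=(1000, 3)),
                       columns=['alpha', 'beta', 'sigma'])

# Off-diagonal panels show pairwise dependence between parameters;
# diagonal panels show each parameter's marginal distribution.
pd.plotting.scatter_matrix(samples, diagonal='kde')
plt.show()
```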
A few weeks ago, when I was preparing the dataset, the visualization of MNIST images was in grayscale even without using cmap='Greys'. But now the images are displayed in different colors if cmap is not used (image shown below).
So I am a bit confused about what's going on.
Is this normal? If not, what can I do to bring the images back to their normal form?
Preview of the visualization
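For reference, a minimal sketch of what is being described: Matplotlib maps a 2-D (single-channel) array through its default colormap, viridis, unless a colormap is passed explicitly:

```python
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist

(x_train, _), (_, _) = mnist.load_data()

# Without cmap, the 2-D array is rendered through the default
# colormap (viridis), which is why the digits look colored.
plt.imshow(x_train[0], cmap='gray')   # or cmap='Greys' for the inverted look
plt.show()
```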
I have to visualize an interactive 3D plot on TensorBoard. Can TensorBoard visualize this, or is there any way to display it on TensorBoard?
Thank you.
Yes, you can use the mesh plugin in TensorBoard. It allows you to create a visualization similar to those found in Three.js. You pass in the vertices, colors, and faces of the 3D data, and TensorBoard creates an interactive 3D visualization. There are other options, such as projections, but those are mainly used for embeddings. A sketch is below.
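A minimal sketch along the lines of the plugin's demo, using the TF1-style summary API; the mesh data here is random placeholder data, so substitute your own vertices, faces, and colors:

```python
import numpy as np
import tensorflow.compat.v1 as tf
from tensorboard.plugins.mesh import summary as mesh_summary

tf.disable_eager_execution()

# Placeholder mesh data: batch x N x 3 for vertices/colors,
# batch x F x 3 for triangle faces (vertex indices).
vertices = np.random.rand(1, 100, 3).astype(np.float32)
faces = np.random.randint(0, 100, size=(1, 50, 3)).astype(np.int32)
colors = np.random.randint(0, 255, size=(1, 100, 3)).astype(np.int32)

vertices_t = tf.placeholder(tf.float32, vertices.shape)
faces_t = tf.placeholder(tf.int32, faces.shape)
colors_t = tf.placeholder(tf.int32, colors.shape)

# One summary op bundling vertices, faces, and per-vertex colors.
summary_op = mesh_summary.op(
    'mesh', vertices=vertices_t, faces=faces_t, colors=colors_t)

with tf.Session() as sess, tf.summary.FileWriter('logs/mesh') as writer:
    writer.add_summary(sess.run(summary_op, feed_dict={
        vertices_t: vertices, faces_t: faces, colors_t: colors}))
```

Then point TensorBoard at the log directory (tensorboard --logdir logs/mesh) and the mesh dashboard should appear.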
I am working on an object detection problem with my own dataset. I want to figure out the scales and aspect ratios that I should specify in the config file for the Faster R-CNN provided by the TensorFlow Object Detection API. The first step is the image resizer; I am using the fixed shape resizer, as it allows a batch size of more than 1. I read that this uses bilinear interpolation for downsampling and upsampling. How do I calculate the new ground-truth box coordinates after this resizing? Also, once we have the new ground-truth box coordinates, how do we calculate the scales and aspect ratios of the anchor boxes to specify in the config file to improve the localisation loss?
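Not an authoritative answer, but the box arithmetic itself is straightforward. A sketch, assuming pixel-coordinate boxes and assuming the API's default 256-px base anchor size for Faster R-CNN's grid anchor generator (check your own config):

```python
def resize_box(box, orig_size, new_size):
    """Rescale a ground-truth box (xmin, ymin, xmax, ymax) in pixels.

    With a fixed shape resizer, every image is stretched to (new_w, new_h),
    so the x coordinates scale by new_w/orig_w and the y coordinates
    by new_h/orig_h.
    """
    (ow, oh), (nw, nh) = orig_size, new_size
    sx, sy = nw / ow, nh / oh
    xmin, ymin, xmax, ymax = box
    return (xmin * sx, ymin * sy, xmax * sx, ymax * sy)

def box_scale_and_aspect(box, base_anchor=256.0):
    """Anchor-style statistics for a resized box: aspect ratio = width/height,
    scale = sqrt(area) relative to the base anchor size (assumed 256 px here).
    """
    xmin, ymin, xmax, ymax = box
    w, h = xmax - xmin, ymax - ymin
    return (w * h) ** 0.5 / base_anchor, w / h

# Example: a box in a 1024x768 image, with the resizer set to 640x640.
box = resize_box((100, 200, 400, 350), orig_size=(1024, 768), new_size=(640, 640))
scale, aspect = box_scale_and_aspect(box)
print(box, scale, aspect)
```

A common heuristic is to compute these statistics over all resized ground-truth boxes and cluster them (e.g., with k-means), then use the cluster centres as the scales and aspect_ratios in the anchor generator.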