TensorFlow image too small to display results properly - TensorFlow 2.0

Is there a possible way for me to resize or change the way the results are being displayed for my object detection?
Any help would be greatly appreciated!

I think from your question you are asking how to upsample an image. There are a few ways; the simplest, in my opinion, is to use Pillow. See here: https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.resize
from PIL import Image
im = Image.open("hopper.jpg")
# Provide the target width and height of the image
(width, height) = (im.width * 2, im.height * 2)
im_resized = im.resize((width, height))
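If the detection result is already a NumPy array rather than a file on disk (an assumption; the question doesn't show how the results are produced), the same idea works by round-tripping through Pillow:
import numpy as np
from PIL import Image

# result_image stands in for an HxWx3 uint8 array with the detections drawn on it
result_image = np.zeros((120, 160, 3), dtype=np.uint8)

im = Image.fromarray(result_image)
im_upscaled = im.resize((im.width * 4, im.height * 4))
upscaled_array = np.asarray(im_upscaled)  # back to NumPy if the display code needs it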

Related

Making plot with subfigure created elsewhere (matplotlib)

I have a function makeGrid(tensor) that takes as an argument a PyTorch tensor of shape (B, 3, H, W) and uses ImageGrid to make a figure that displays the batch of images in a grid.
Now, the model that outputs "tensor" depends on one parameter, "alpha". I would like to include a slider on the figure so that I can modify alpha "live". I am using the "Slider" widget from matplotlib roughly as follows:
result = model(tensor)
images, grid = makeGrid(result)

ifig = plt.figure()  # figure with slider
axalpha = ifig.add_axes([0.25, 0.1, 0.65, 0.03])
# How to add the "images" to ifig ???
alpha_slider = Slider(
    ax=axalpha,
    label="alpha",
    valmin=-2,
    valmax=2,
    valinit=1,
)

def update(val):
    model.alpha = alpha_slider.val
    result = model(img_batch)
    images, grid = makeGrid(result)
    # Same problem: need to update ifig with the new images

alpha_slider.on_changed(update)
plt.show()
So, my main problem is that I have no idea how to use the already created figure (images) and/or grid (which is an ImageGrid object, roughly a list of axes afaik) as a subplot of ifig, the interactive figure that contains the slider and the images.
Very sorry if this is a basic question, but searching for "how to add already created figure as subplot of figure" and similar didn't yield a solution to my problem (at least not from my limited point of view).
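No answer is recorded here, but a common pattern for this kind of problem is to build the ImageGrid on the interactive figure itself, keep references to the AxesImage objects, and only swap their data in the slider callback. Below is a minimal, self-contained sketch; model and makeGrid are replaced by stand-ins, so the names, shapes, and the fake model are assumptions, not the asker's code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
from mpl_toolkits.axes_grid1 import ImageGrid

batch = np.random.rand(4, 3, 64, 64)                 # stand-in for the (B, 3, H, W) tensor
model = lambda x, alpha: np.clip(x * alpha, 0, 1)    # stand-in for the real model

ifig = plt.figure()
grid = ImageGrid(ifig, 111, nrows_ncols=(2, 2))      # grid lives on ifig directly
ims = [ax.imshow(img.transpose(1, 2, 0))
       for ax, img in zip(grid, model(batch, 1.0))]

ax_alpha = ifig.add_axes([0.25, 0.05, 0.65, 0.03])
alpha_slider = Slider(ax=ax_alpha, label="alpha", valmin=-2, valmax=2, valinit=1)

def update(val):
    result = model(batch, alpha_slider.val)
    for im, img in zip(ims, result):                 # reuse the AxesImage objects
        im.set_data(img.transpose(1, 2, 0))
    ifig.canvas.draw_idle()

alpha_slider.on_changed(update)
plt.show()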

How to add text into a plot in pyqtgraph like matplotlib.pyplot.text()

As the title says, I want to add text to a graph that I plotted with pyqtgraph, but I couldn't find any function like matplotlib.pyplot.text() that lets me set the text and its position in the graph.
self.plt_1.setLabel('left', 'CDF')
self.plt_1.setLabel('bottom', 'Delay', units='ms')
self.plt_1.setXRange(0, 200)
self.plt_1.setYRange(0, 1)
self.plt_1.setWindowTitle('DL CDF Curve')
self.plt_1.setMouseEnabled(x=False, y=False)
self.plt_1.setMenuEnabled(False)
self.plt_1.setText(30, 20, str(self.x_dl_5g_flag))
I tried this, but it doesn't work in my case. Does anyone know how to do it in pyqtgraph? Thanks.
self.text = pg.TextItem(str(self.x_dl_5g_flag))
self.plt_1.addItem(self.text)
self.text.setPos(30,20)
If you want to add text to a graph and define its position on the plot in coordinates relative to the plot canvas (not data coordinates) you can use LabelItem instead of TextItem, something along the following lines (with pyqtgraph imported as pg):
self.text_label = pg.LabelItem("Your text")
self.text_label.setParentItem(self.plt_1.graphicsItem())
self.text_label.anchor(itemPos=(0.4, 0.1), parentPos=(0.4, 0.1))

Imshow differs drastically from applying matplotlib.cm to a segmented image

Hi and thanks for reading.
What I am trying to do is make a web app that takes an image, runs it through the model and returns a segmented version. I cannot use imshow in the web app, though, so I tried applying a colormap through matplotlib.cm.viridis; however, it returns a much darker image.
Here are some code and images for reference:
pred = new_model.predict(np.expand_dims(img, 0))
pred_mask = np.argmax(pred, axis=-1)
pred_mask = pred_mask[0]
This returns a 2D grayscale image which, when put into matplotlib's imshow, looks like this (the last picture on the right is the output of the model). Code and image below.
axs[0].imshow(m1)
axs[0].set_title('Image')
axs[1].imshow(test_label1)
axs[1].set_title('Ground Truth')
axs[2].imshow(new_pred)
axs[2].set_title('Prediction')
However, when I apply a colormap to the image using matplotlib.cm (something I have to do for the app to function), I get this image. Code and image presented below.
Adding the colormap (viridis, which, as far as I know, is the default in matplotlib 3.5):
from matplotlib import cm
pred_mask = cm.viridis(pred_mask / 255)*255
pred_mask = np.asarray(pred_mask, dtype='uint8')
Plotting the image:
fig, axs = plt.subplots(1, 3, figsize=(20, 10))
axs[0].imshow(m1)
axs[0].set_title('Image')
axs[1].imshow(test_label1)
axs[1].set_title('Ground Truth')
axs[2].imshow(pred_mask)
axs[2].set_title('Prediction')
But as you can see, the image is much darker, without even a hint of lighter blue or yellow, i.e. worse. How can I make it closer to the imshow output?
PS: Thank you very much for reading, and I hope that someone has an answer to this. Any suggestions would be much appreciated.
This is most likely related to the number range of the image or colormap, respectively.
As the prediction mask can be faintly seen, my money would be on either multiplying the prediction data by 255 or setting the vmax of imshow to a smaller value. In any case, it would be useful to know the min/max values of pred_mask and additionally to show a colorbar for the right plot.
I hope that gets you on the right track.
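To illustrate that suggestion with a sketch (not code from the question; the stand-in mask and the assumption that pred_mask holds small integer class ids are mine):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

# Stand-in for np.argmax(pred, axis=-1)[0]: small integer class ids, not 0-255 values
pred_mask = np.random.randint(0, 4, size=(128, 128))

print(pred_mask.min(), pred_mask.max())  # check the actual value range first

# Scale by the true maximum (e.g. n_classes - 1) instead of 255, which is what
# imshow effectively does when it autoscales the colormap
norm = pred_mask / pred_mask.max()
colored = (cm.viridis(norm) * 255).astype('uint8')

# Colorbar on the raw mask to see which values are actually present
im = plt.imshow(pred_mask)
plt.colorbar(im)
plt.show()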

TensorFlow Binary Classification

I'm trying to make a simple binary image classification with TensorFlow, but the results are just all over the place.
The classifier is supposed to check whether my gate is open or closed. I already have some python scripts to rotate and crop the images to eliminate the surroundings, with an image size of 130w*705h.
Images are below. I know I must be doing something totally wrong, because the two classes are almost night-and-day different, yet the classifier still gives completely random results. Any tips? Is there a simpler library or maybe a cloud service I could use for this if TF is too complicated?
Any help is appreciated, thanks!
Gate Closed
Gate Open
Just compute the average grey value of your images and define a threshold. If you want something more sophisticated, compute average gradients or something like that. Your problem seems far too simple to use TF or CV.
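A minimal sketch of that suggestion (the threshold value and the assumption that an open gate photographs brighter are placeholders that would need calibrating on real "open"/"closed" images):
import cv2

def gate_is_open(image_path, threshold=100):
    # Mean grey value of the (already cropped) image; threshold is a placeholder
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return gray.mean() > threshold

# e.g. gate_is_open('/path/to/cropped_gate.JPG')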
After taking into consideration Martin's Answer, I decided to go with average grays after some filtering and edge detection.
I think it will work great for my case, thanks!
Some code:
import cv2
import os
import numpy as np

# https://medium.com/sicara/opencv-edge-detection-tutorial-7c3303f10788
inputPath = '/Users/axelsariel/Desktop/GateImages/Cropped/'
# subDir = 'Closed/'
subDir = 'Open/'

# Keep only the .JPG files (don't remove items from a list while iterating over it)
openImagesList = [f for f in os.listdir(inputPath + subDir) if f.endswith('.JPG')]

index = 0
while True:
    image = openImagesList[index]
    img = cv2.imread(inputPath + subDir + image)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 11)
    grayFiltered = cv2.bilateralFilter(gray, 7, 50, 50)
    edgesFiltered = cv2.Canny(grayFiltered, 80, 160)
    images = np.hstack((gray, grayFiltered, edgesFiltered))
    cv2.imshow(image, images)
    key = cv2.waitKey()
    if key == 3:        # next image
        index += 1
    elif key == 2:      # previous image
        index -= 1
    elif key == ord('q'):
        break

cv2.destroyAllWindows()
Average Grays after filtering:
Filtering steps:

How to export svg in matplotlib with correct mm scale

I am trying to export a figure from matplotlib for laser cutting. The figure is plotted with millimeters as the units.
I'm trying to ensure the correct scale by getting the bounding box in inches and then setting the figure size to that value:
import matplotlib.pyplot as plt

ax = plt.subplot(111)
# <snipped for brevity...plotting of lines and paths>
x_bound = list(map(mm_to_inch, ax.get_xbound()))
y_bound = list(map(mm_to_inch, ax.get_ybound()))
plt.gcf().set_size_inches(x_bound[1] - x_bound[0], y_bound[1] - y_bound[0])
plt.axis('off')
plt.savefig('{0}.svg'.format(self.name), format='svg')
The exported .svg is ~2/3rds of the intended scale and I'm not familiar enough with axes and figures to know why. Additionally, there is a black border around the intended geometry. Here is some example output:
.svg output (converted to .png)
How should I remove the black border and scale the .svg correctly?
You probably want to remove the margins around the axes completely,
plt.gcf().subplots_adjust(0,0,1,1)
I might note, however, that the result may not be precise enough for the application. Definitely also consider creating the figure with a CAD program.
Based on ImportanceOfBeingErnest's answer and some responses to other Stack Overflow questions, the following solution works:
plt.axis('off')
plt.margins(0, 0)
plt.gca().xaxis.set_major_locator(plt.NullLocator())
plt.gca().yaxis.set_major_locator(plt.NullLocator())
plt.subplots_adjust(top=1, bottom=0, right=1, left=0, hspace=0, wspace=0)
x_bound = list(map(mm_to_inch, self._ax.get_xbound()))
y_bound = list(map(mm_to_inch, self._ax.get_ybound()))
plt.gcf().set_size_inches(x_bound[1] - x_bound[0], y_bound[1] - y_bound[0])
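For reference, mm_to_inch is not shown in the question; presumably it is a helper along these lines (25.4 mm to the inch):
def mm_to_inch(mm):
    # Convert millimetres to inches for set_size_inches
    return mm / 25.4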