Computing Bounding Boxes from a Mask-Image (Tensorflow or other)

I'm looking for ways to convert a mask (a Height x Width boolean image) into a series of bounding boxes (see example picture below, which I hand-drew), with boxes encircling the "islands of truth".
Specifically, I'm looking for a way that would work with standard TensorFlow ops (though all input is welcome). I want this so I can convert the model to TFLite without adding custom ops and recompiling from source. But in general it would just be nice to be aware of different ways of doing this.
Notes:
I already have a solution involving non-standard TensorFlow, based on tfa.image.connected_components (see solution here); a rough sketch of that kind of approach is shown just after these notes. However, that op is not included in TensorFlow Lite. It also feels like it does something slightly harder than necessary (finding connected components feels harder than just outlining blobs in an image without worrying about whether they are connected or not).
I know I haven't specified here exactly how I'd like the boxes generated (e.g. whether separate "yin-yang-style" connected components should have separate boxes even if they overlap, etc.). Really I'm not worried about the details, just that the resulting boxes look "reasonable".
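For context, a rough sketch of what that connected-components route can look like (not the exact linked code, just an illustration of the dependency I'd like to avoid: tfa.image.connected_components plus segment reductions):

import tensorflow as tf
import tensorflow_addons as tfa

def mask_to_boxes_tfa(mask):
    """ Sketch: (H, W) bool mask -> (N, 4) LTRB boxes, one per connected component. """
    labels = tfa.image.connected_components(mask)   # (H, W) int32, 0 = background
    num = tf.reduce_max(labels)                     # number of components
    ys, xs = tf.meshgrid(tf.range(tf.shape(mask)[0]), tf.range(tf.shape(mask)[1]), indexing='ij')
    fg = labels > 0
    seg = tf.boolean_mask(labels, fg) - 1           # 0-based component ids
    ys_fg = tf.boolean_mask(ys, fg)
    xs_fg = tf.boolean_mask(xs, fg)
    left = tf.math.unsorted_segment_min(xs_fg, seg, num)
    top = tf.math.unsorted_segment_min(ys_fg, seg, num)
    right = tf.math.unsorted_segment_max(xs_fg, seg, num) + 1
    bottom = tf.math.unsorted_segment_max(ys_fg, seg, num) + 1
    return tf.stack([left, top, right, bottom], axis=1)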
Some related questions (please read before flagging as duplicate!):
Converting a binary mask into a bounding box in tensorflow asks about creating a single bounding box, which is significantly easier.
Generating bounding boxes from heatmap data (similar, but asks the slightly broader question of converting from "heatmap", and does not specify Tensorflow).
Create Bounding Boxes from Image Labels assumes the image has already been segmented into components (called "labels" there)
I'm ideally looking for something that does not need training (e.g. YOLO-style regression) and just works out of the box (heh).
Edit: Here is an example mask image: https://github.com/petered/data/blob/master/images/example_mask3.png, which can be loaded into a mask with
mask = cv2.imread(os.path.expanduser('~/Downloads/example_mask3.png')).mean(axis=2) > 50

Well, not sure if this is doable with just tensorflow ops, but here is a Python/Numpy implementation (which uses a very inefficient double-for loop). In principle, it should be fast if vectorized (again, not sure if possible) or written in C, because it just does 2 passes over the pixels to compute the boxes.
I'm not sure if this algorithm has an existing name, but if not I would call it Downright Boxing because it involves extending the mask-segments down and to the right in order to find boxes.
Here's the result on the mask in the question (with a few extra shapes added as examples):
import numpy as np

# Note: the Array['H,W', bool] annotations are informal shape hints from a custom helper type,
# not a standard annotation - replace or remove them if you don't have such a class defined.
def mask_to_boxes(mask: Array['H,W', bool]) -> Array['N,4', int]:
    """ Convert a boolean (Height x Width) mask into a (N x 4) array of NON-OVERLAPPING bounding boxes
    surrounding "islands of truth" in the mask.  Boxes indicate the (Left, Top, Right, Bottom) bounds
    of each island, with Right and Bottom being NON-INCLUSIVE (ie they point to the indices AFTER the island).

    This algorithm (Downright Boxing) does not necessarily put separate connected components into
    separate boxes.

    You can "cut out" the island-masks with
        boxes = mask_to_boxes(mask)
        island_masks = [mask[t:b, l:r] for l, t, r, b in boxes]
    """
    max_ix = max(s+1 for s in mask.shape)   # Use this to represent background
    # These arrays will be used to carry the "box start" indices down and to the right.
    x_ixs = np.full(mask.shape, fill_value=max_ix)
    y_ixs = np.full(mask.shape, fill_value=max_ix)

    # Propagate the earliest x-index in each segment to the bottom-right corner of the segment
    for i in range(mask.shape[0]):
        x_fill_ix = max_ix
        for j in range(mask.shape[1]):
            above_cell_ix = x_ixs[i-1, j] if i>0 else max_ix
            still_active = mask[i, j] or ((x_fill_ix != max_ix) and (above_cell_ix != max_ix))
            x_fill_ix = min(x_fill_ix, j, above_cell_ix) if still_active else max_ix
            x_ixs[i, j] = x_fill_ix

    # Propagate the earliest y-index in each segment to the bottom-right corner of the segment
    for j in range(mask.shape[1]):
        y_fill_ix = max_ix
        for i in range(mask.shape[0]):
            left_cell_ix = y_ixs[i, j-1] if j>0 else max_ix
            still_active = mask[i, j] or ((y_fill_ix != max_ix) and (left_cell_ix != max_ix))
            y_fill_ix = min(y_fill_ix, i, left_cell_ix) if still_active else max_ix
            y_ixs[i, j] = y_fill_ix

    # Find the bottom-right corners of each segment
    new_xstops = np.diff((x_ixs != max_ix).astype(np.int32), axis=1, append=False)==-1
    new_ystops = np.diff((y_ixs != max_ix).astype(np.int32), axis=0, append=False)==-1
    corner_mask = new_xstops & new_ystops
    y_stops, x_stops = np.array(np.nonzero(corner_mask))

    # Extract the boxes, getting the top-left corners from the index arrays
    x_starts = x_ixs[y_stops, x_stops]
    y_starts = y_ixs[y_stops, x_stops]
    ltrb_boxes = np.hstack([x_starts[:, None], y_starts[:, None], x_stops[:, None]+1, y_stops[:, None]+1])
    return ltrb_boxes
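For reference, a quick usage sketch on the example mask from the question (boxes are (Left, Top, Right, Bottom) with exclusive right/bottom, per the docstring; drawing assumes OpenCV is available and the output filename is just a placeholder):

import os
import cv2

mask = cv2.imread(os.path.expanduser('~/Downloads/example_mask3.png')).mean(axis=2) > 50
boxes = mask_to_boxes(mask)

# draw the boxes on top of the mask for visual inspection
vis = cv2.cvtColor((mask * 255).astype('uint8'), cv2.COLOR_GRAY2BGR)
for l, t, r, b in boxes:
    cv2.rectangle(vis, (int(l), int(t)), (int(r) - 1, int(b) - 1), (0, 255, 0), 1)
cv2.imwrite('boxes_on_mask.png', vis)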


OpenCV detect blobs on the image

I need to find (and draw a rect around) / get the max and min radius blobs in the image (samples below).
The problem is to find the correct filters for the image that will allow a Canny or Threshold transformation to highlight the blobs. Then I'm going to use findContours to find the rectangles.
I've tried:
Threshold - with different levels
blur->erode->erode->grayscale->canny
changing the image tone with a variety of "lines"
etc. The best result was detecting a piece (20-30%) of a blob, and that info did not allow drawing a rect around the blob. Also, thanks to shadows, dots not related to the blobs were detected, which also prevents detecting the area.
As I understand it, I need to find a contour that has hard contrast (not smooth like in a shadow). Is there any way to do that with OpenCV?
Update
cases separately: image 1, image 2, image 3, image 4, image 5, image 6, image 7, image 8, image 9, image 10, image 11, image 12
One more Update
I believe that the blobs have a contrast area at the edge. So I've tried to make the edges stronger: I created 2 grayscale Mats, A and B, applied a Gaussian blur to the second one, B (to reduce noise a bit), then made some calculations: go around every pixel and find the max difference between pixel (Xi, Yi) of 'A' and the nearby pixels of 'B',
and apply that max difference to (Xi, Yi). So I get something like this:
Am I on the right track? BTW, can I achieve something like this via OpenCV methods?
Update: Image denoising helps to reduce noise, Sobel highlights the contours, then threshold + findContours and a custom convexHull gives something similar to what I'm looking for, but it's not good for some blobs.
Since there are big differences between the input images, the algorithm should be able to adapt to the situation. Since Canny is based on detecting high frequencies, my algorithm treats the sharpness of the image as the parameter used for preprocessing adaptation. I didn't want to spend a week figuring out the functions for all the data, so I applied a simple, linear function based on 2 images and then tested with a third one. Here are my results:
Keep in mind that this is a very basic approach and only proves a point. It will need experiments, tests, and refining. The idea is to use Sobel and sum over all the pixels acquired. That, divided by the size of the image, should give you a basic estimation of the high-frequency response of the image. Now, experimentally, I found values of clipLimit for the CLAHE filter that work in 2 test cases, and fitted a linear function connecting the high-frequency response of the input to the CLAHE clipLimit, yielding good results.
sobel = get_sobel(img)
clip_limit = (-2.556) * np.sum(sobel)/(img.shape[0] * img.shape[1]) + 26.557
That's the adaptive part. Now for the contours. It took me a while to figure out a correct way of filtering out the noise. I settled for a simple trick: using contours finding twice. First I use it to filter out the unnecessary, noisy contours. Then I continue with some morphological magic to end up with correct blobs for the objects being detected (more details in the code). The final step is to filter bounding rectangles based on the calculated mean, since, on all of the samples, the blobs are of relatively similar size.
import cv2
import numpy as np


def unsharp_mask(img, blur_size=(5,5), imgWeight=1.5, gaussianWeight=-0.5):
    gaussian = cv2.GaussianBlur(img, (5,5), 0)
    return cv2.addWeighted(img, imgWeight, gaussian, gaussianWeight, 0)


def smoother_edges(img, first_blur_size, second_blur_size=(5,5), imgWeight=1.5, gaussianWeight=-0.5):
    img = cv2.GaussianBlur(img, first_blur_size, 0)
    return unsharp_mask(img, second_blur_size, imgWeight, gaussianWeight)


def close_image(img, size=(5,5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)


def open_image(img, size=(5,5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)


def shrink_rect(rect, scale=0.8):
    center, (width, height), angle = rect
    width = width * scale
    height = height * scale
    rect = center, (width, height), angle
    return rect


def clahe(img, clip_limit=2.0):
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(5,5))
    return clahe.apply(img)


def get_sobel(img, size=-1):
    sobelx64f = cv2.Sobel(img, cv2.CV_64F, 2, 0, size)
    abs_sobel64f = np.absolute(sobelx64f)
    return np.uint8(abs_sobel64f)


img = cv2.imread("blobs4.jpg")
# save color copy for visualizing
imgc = img.copy()
# resize image to make the analytics easier (a form of filtering)
resize_times = 5
img = cv2.resize(img, None, img, fx=1 / resize_times, fy=1 / resize_times)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# use sobel operator to evaluate high frequencies
sobel = get_sobel(img)
# experimentally calculated function - needs refining
clip_limit = (-2.556) * np.sum(sobel) / (img.shape[0] * img.shape[1]) + 26.557
# don't apply clahe if there is enough high freq to find blobs
if clip_limit < 1.0:
    clip_limit = 0.1
# limit clahe if there's not enough details - needs more tests
if clip_limit > 8.0:
    clip_limit = 8

# apply clahe and unsharp mask to improve high frequencies as much as possible
img = clahe(img, clip_limit)
img = unsharp_mask(img)

# filter the image to ensure edge continuity and perform Canny
# (values selected experimentally, using trackbars)
img_blurred = cv2.GaussianBlur(img.copy(), (2*2+1, 2*2+1), 0)
canny = cv2.Canny(img_blurred, 35, 95)

# find first contours
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# prepare black image to draw contours
canvas = np.ones(img.shape, np.uint8)
for c in cnts:
    l = cv2.arcLength(c, False)
    x, y, w, h = cv2.boundingRect(c)
    aspect_ratio = float(w) / h
    # filter "bad" contours (values selected experimentally)
    if l > 500:
        continue
    if l < 20:
        continue
    if aspect_ratio < 0.2:
        continue
    if aspect_ratio > 5:
        continue
    if l > 150 and (aspect_ratio > 10 or aspect_ratio < 0.1):
        continue
    # draw all the other contours
    cv2.drawContours(canvas, [c], -1, (255, 255, 255), 2)

# perform closing and blurring, to close the gaps
canvas = close_image(canvas, (7,7))
img_blurred = cv2.GaussianBlur(canvas, (8*2+1, 8*2+1), 0)

# smooth the edges a bit to make sure canny will find continuous edges
img_blurred = smoother_edges(img_blurred, (9,9))
kernel = np.ones((3,3), np.uint8)
# erode to make sure separate blobs are not touching each other
eroded = cv2.erode(img_blurred, kernel)
# perform necessary thresholding before Canny
_, im_th = cv2.threshold(eroded, 50, 255, cv2.THRESH_BINARY)
canny = cv2.Canny(im_th, 11, 33)

# find contours again. this time mostly the right ones
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# calculate the mean area of the contours' bounding rectangles
sum_area = 0
rect_list = []
for i, c in enumerate(cnts):
    rect = cv2.minAreaRect(c)
    _, (width, height), _ = rect
    area = width * height
    sum_area += area
    rect_list.append(rect)
mean_area = sum_area / len(cnts)

# choose only rectangles that fulfill requirement:
# area > mean_area*0.6
for rect in rect_list:
    _, (width, height), _ = rect
    box = cv2.boxPoints(rect)
    box = np.int0(box * 5)
    area = width * height
    if area > mean_area * 0.6:
        # shrink the rectangles, since the shadows and reflections
        # make the resulting rectangle a bit bigger
        # the value was guessed - might need refining
        rect = shrink_rect(rect, 0.8)
        box = cv2.boxPoints(rect)
        box = np.int0(box * resize_times)
        cv2.drawContours(imgc, [box], 0, (0, 255, 0), 1)

# resize for visualizing purposes
imgc = cv2.resize(imgc, None, imgc, fx=0.5, fy=0.5)
cv2.imshow("imgc", imgc)
cv2.imwrite("result3.png", imgc)
cv2.waitKey(0)
Overall I think that's a very interesting problem, a little bit too big to be answered here. The approach I presented should be treated as a road sign, not a complete solution. The basic idea being:
Adaptive preprocessing.
Finding contours twice: for filtering and then for the actual classification.
Filtering the blobs based on their mean size.
Thanks for the fun and good luck!
Here is the code I used:
import cv2
from sympy import Point, Ellipse
import numpy as np

x1 = 'C:\\Users\\Desktop\\python\\stack_over_flow\\XsXs9.png'
image = cv2.imread(x1, 0)
image1 = cv2.imread(x1, 1)
x, y = image.shape

median = cv2.GaussianBlur(image, (9,9), 0)
median1 = cv2.GaussianBlur(image, (21,21), 0)
a = median1 - median
c = 255 - a
ret, thresh1 = cv2.threshold(c, 12, 255, cv2.THRESH_BINARY)

kernel = np.ones((5,5), np.uint8)
dilation = cv2.dilate(thresh1, kernel, iterations=1)
kernel = np.ones((5,5), np.uint8)
opening = cv2.morphologyEx(dilation, cv2.MORPH_OPEN, kernel)
cv2.imwrite('D:\\test12345.jpg', opening)

ret, contours, hierarchy = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = np.size(contours[:])
Blank_window = np.zeros([x, y, 3])
Blank_window = np.uint8(Blank_window)

for u in range(0, c - 1):
    if np.size(contours[u]) > 200:
        ellipse = cv2.fitEllipse(contours[u])
        (center, axes, orientation) = ellipse
        majoraxis_length = max(axes)
        minoraxis_length = min(axes)
        eccentricity = np.sqrt(1 - (minoraxis_length/majoraxis_length)**2)
        if eccentricity < 0.8:
            cv2.drawContours(image1, contours, u, (255, 1, 255), 3)

cv2.imwrite('D:\\marked.jpg', image1)
Here the problem is to find a near-circular object. This simple solution is based on finding the eccentricity of each and every contour. The objects being detected are the drops of water.
I have a partial solution in place.
FIRST
I initially converted the image to the HSV color space and tinkered with the value channel. On doing so I came across something unique. In almost every image, the droplets have a tiny reflection of light. This was highlighted distinctly in the value channel.
Upon inverting this I was able to obtain the following:
Sample 1:
Sample 2:
Sample 3:
SECOND
Now we have to extract the location of those points. To do so I performed anomaly detection on the inverted value channel obtained. By anomaly I mean the black dot present in them.
In order to do this I calculated the median of the inverted value channel. I treated pixel values within 70% above and below the median as normal pixels, and every pixel value lying beyond this range as an anomaly. The black dots fit perfectly there.
Sample 1:
Sample 2:
Sample 3:
It did not turn out well for a few images.
As you can see the black dot is due to the reflection of light which is unique to the droplets of water. Other circular edges might be present in the image but the reflection distinguishes the droplet from those edges.
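For illustration, a minimal sketch of the first two steps described above (the file name is a placeholder, and the 70% band around the median is the value mentioned above):

import cv2
import numpy as np

img = cv2.imread("droplets.jpg")                  # placeholder filename
value = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]

# FIRST: invert the value channel, so the bright specular reflections become black dots
inverted = 255 - value

# SECOND: anomaly detection - pixels more than 70% away from the median are anomalies
med = np.median(inverted)
anomalies = (inverted < 0.3 * med) | (inverted > 1.7 * med)

# the candidate droplet reflections are the dark anomalies
candidates = anomalies & (inverted < 0.3 * med)
ys, xs = np.nonzero(candidates)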
THIRD
Now since we have the location of these black dots, we can perform Difference of Gaussians (DoG) (also mentioned in the update of the question) and obtain relevant edge information. If the obtained location of the black dots lie within the edges discovered it is said to be a water droplet.
Disclaimer: This method does not work for all the images. You can add your suggestions to this.
Good day. I am working on this subject and my advice is: first apply several denoising filters, such as Gaussian filters, and process the image after that.
You can detect these circles with blob detection rather than with contours.
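To sketch that suggestion (assuming OpenCV 3+, where the detector is created with cv2.SimpleBlobDetector_create; the file name and all parameter values below are placeholders that will need tuning):

import cv2

img = cv2.imread("blobs.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder filename
img = cv2.GaussianBlur(img, (5, 5), 0)                # denoise first, as advised above

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 100            # rough lower bound on blob size
params.filterByCircularity = True
params.minCircularity = 0.5     # blobs are roughly round
params.filterByConvexity = False
params.filterByInertia = False

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)

# draw a rect around each detected blob
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for kp in keypoints:
    x, y, r = int(kp.pt[0]), int(kp.pt[1]), int(kp.size / 2)
    cv2.rectangle(out, (x - r, y - r), (x + r, y + r), (0, 255, 0), 1)
cv2.imwrite("blobs_detected.png", out)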

Kinetic Theory Model

Edit: I've now fixed the problem I asked about. The spheres were leaving the box in the corners, where the if statements (in the while loop shown below) got confused. In the bits of code that reverse the individual components of velocity on contact with walls, some elif statements were used. When elif is used (as far as I can tell), if the sphere exceeds more than one position limit at a time, the program only reverses the velocity component for one of them. This is rectified by replacing elif with simply if. I'm not sure I quite understand the reason behind this, so hopefully someone cleverer than I will comment with that information, but for now, if anyone has the same problem, I hope my limited input helps!
Some context first:
I'm trying to build a model of the kinetic theory of gases in VPython, as a revision exercise for my (Physics) degree. This involves me building a hollow box and putting a bunch of spheres in it, randomly positioned throughout the box. I then need to assign each of the spheres its own random velocity and then use a loop to adjust the position of each sphere with reference to its velocity vector and a time step.
The spheres should also undergo elastic collisions with each wall and all other spheres.
When a sphere meets a wall in the x-direction, its x-velocity component is reversed and similarly in the y and z directions.
When a sphere meets another sphere, they swap velocities.
Currently, my code works as far as creating the right number of spheres, distributing them randomly, and giving each sphere its own random velocity. The spheres also move as they should, except for collisions. The spheres should all stay inside the box, since they should bounce off all the walls. They appear to be bouncing off each other; however, occasionally a sphere or two will go straight through the box.
I am extremely new to programming and I don't quite understand what's going on here or why it's happening but I'd be very grateful if someone could help me.
Below is the code I have so far (I've tried to comment what I'm doing at each step):
##########################################################
# This code is meant to create an empty box and then create
# a certain number of spheres (num_spheres) that will sit inside
# the box. Each sphere will then be assigned a random velocity vector.
# A loop will then adjust the position of each sphere to make them
# move. The spheres will undergo elastic collisions with the box walls
# and also with the other spheres in the box.
##########################################################
from visual import *
import random as random
import numpy as np
num_spheres = 15
fps = 24 #fps of while loop (later)
dt = 1.0/fps #time step
l = 40 #length of box
w = 2 #width of box
radius = 0.5 #radius of spheres
##########################################################
# Creating an empty box with sides length/height l, width w
wallR = box(pos = (l/2.0,0,0), size=(w,l,l), color=color.white, opacity=0.25)
wallL = box(pos = (-l/2.0,0,0), size=(w,l,l), color=color.white, opacity=0.25)
wallU = box(pos = (0,l/2.0,0), size=(l,w,l), color=color.white, opacity=0.25)
wallD = box(pos = (0,-l/2.0,0), size=(l,w,l), color=color.white, opacity=0.25)
wallF = box(pos = (0,0,l/2.0), size=(l,l,w), color=color.white, opacity=0.25)
wallB = box(pos = (0,0,-l/2.0), size=(l,l,w), color=color.white, opacity=0.25)
#defining a function that creates a list of 'num_spheres' randomly positioned spheres
def create_spheres(num):
    global l, radius
    particles = []                 # Create an empty list
    for i in range(0, num):        # Loop i from 0 to num-1
        v = np.random.rand(3)
        particles.append(sphere(pos=(3.0/4.0*l) * (v - 0.5),   # pos such that spheres are inside box
                                radius=radius, color=color.red, index=i))
        # each sphere is given an index for ease of referral later
    return particles
#defining a global variable = the array of velocities for the spheres
velarray = []
#defining a function that gives each sphere a random velocity
def velocity_spheres(sphere_list):
    global velarray
    for sphere in spheres:
        # making the sign of each velocity component random
        rand = random.randint(0, 1)
        if rand == 1:
            sign = 1
        else:
            sign = -1
        mu = 10      # defining an average for the normal distribution
        sigma = 0.1  # defining the standard deviation of the normal distribution
        # 3 random numbers form the velocity vector
        vel = vector(sign*random.normalvariate(mu, sigma), sign*random.normalvariate(mu, sigma),
                     sign*random.normalvariate(mu, sigma))
        velarray.append(vel)
spheres = create_spheres(num_spheres) #creating some spheres
velocity_spheres(spheres) # invoking the velocity function
while True:
    rate(fps)
    for sphere in spheres:
        sphere.pos += velarray[sphere.index]*dt
        # incrementing sphere position by reference to its own velocity vector
        if abs(sphere.pos.x) > (l/2.0)-w-radius:
            (velarray[sphere.index])[0] = -(velarray[sphere.index])[0]
            # reversing x-velocity on contact with a side wall
        elif abs(sphere.pos.y) > (l/2.0)-w-radius:
            (velarray[sphere.index])[1] = -(velarray[sphere.index])[1]
            # reversing y-velocity on contact with a side wall
        elif abs(sphere.pos.z) > (l/2.0)-w-radius:
            (velarray[sphere.index])[2] = -(velarray[sphere.index])[2]
            # reversing z-velocity on contact with a side wall
        for sphere2 in spheres:  # checking other spheres
            if sphere2 != sphere:
                # making sure we aren't checking the sphere against itself
                if abs(sphere2.pos-sphere.pos) < (sphere.radius+sphere2.radius):
                    # if the other spheres are touching the sphere we are looking at
                    v1 = velarray[sphere.index]
                    # noting the velocity of the first sphere before the collision
                    velarray[sphere.index] = velarray[sphere2.index]
                    # giving the first sphere the velocity of the second before the collision
                    velarray[sphere2.index] = v1
                    # giving the second sphere the velocity of the first before the collision
Thanks again for any help!
The elif statements within the while loop in the code given in the original question are/were the cause of the problem. The conditional statement, elif, is only applicable if the original, if, condition is not satisfied. The circumstance wherein a sphere meets the corner of the box satisfies at least two of the conditions for reversing velocity components. This means that, while one would expect (at least) two velocity components to be reversed, only one is. That is, the direction specified by the if statement is reversed, whereas the component(s) mentioned in the elif statement(s) are not, as the first condition has been satisfied and, hence, the elif statements are ignored.
If each elif is changed to be a separate if statement, the code works as intended.
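Concretely, the wall checks in the loop above then become three independent if statements:

for sphere in spheres:
    sphere.pos += velarray[sphere.index]*dt
    # each wall check is now independent, so a sphere in a corner
    # can have two (or three) velocity components reversed in the same step
    if abs(sphere.pos.x) > (l/2.0)-w-radius:
        velarray[sphere.index][0] = -velarray[sphere.index][0]
    if abs(sphere.pos.y) > (l/2.0)-w-radius:
        velarray[sphere.index][1] = -velarray[sphere.index][1]
    if abs(sphere.pos.z) > (l/2.0)-w-radius:
        velarray[sphere.index][2] = -velarray[sphere.index][2]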

Recursively change black to white in image with numpy

What's the best way to remove large shadowed regions from greyscale images? I'm struggling to write a method that takes a 2D NumPy array A and an entry (x,y) in A, and "crawls" through the array changing any (x',y') entry "connected" to (x,y) from 0 to 255. What I mean by connected is that there's some path of 0-valued entries from (x,y) to (x',y'). Here's a picture of what I mean.
The black region at the bottom should all be set to grayscale 255. I'm almost positive this algorithm should be recursive; is there a fast way to do this in NumPy, or using PIL?
EDIT:
OK, thanks for the advice; here's what I've been able to come up with:
def creep(data, x, y):
    data[x, y] = 255
    for (i, j) in [(1,0), (-1,0), (0,1), (0,-1)]:
        x, y = x + i, y + j
        try:
            if data[x, y] == 0:
                return creep(data, x, y)
        except:
            pass
    return data

def crop_big_region(data):
    """ Looks for black regions in image and makes them white """
    n, m = data.shape
    r = int(0.012*min(n, m))
    num_samples = int(0.0001*n*m)
    for _ in xrange(0, num_samples):
        x, y = numpy.random.randint(r, n - r), numpy.random.randint(r, m - r)
        if numpy.all(data[x-r:x+r, y-r:y+r] == 0):
            data[x, y] = 255
            data = creep(data, x, y)
    return data
It seems to sort of work, except it just returns lines, instead of filling out the entire region.
Think I'm just too tired to figure out the recursive step here properly.
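A sketch of the same crawl done iteratively with an explicit stack (so it fills the whole connected region rather than a single path, and sidesteps the recursion limit) would look roughly like this:

def creep_iterative(data, x, y):
    """ Flood-fill every 0-valued pixel 4-connected to (x, y) with 255. """
    n, m = data.shape
    stack = [(x, y)]
    data[x, y] = 255
    while stack:
        i, j = stack.pop()
        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m and data[ni, nj] == 0:
                data[ni, nj] = 255
                stack.append((ni, nj))
    return data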
As @Boaz pointed out, this is more of an image processing question than a Python question. You can achieve the desired result using so-called adaptive thresholding. scikit-image has a nice implementation available, with a complete tutorial here:
http://scikit-image.org/docs/dev/auto_examples/plot_threshold_adaptive.html
You will need to tune it a bit, but it should work.
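A rough sketch of that route, assuming a recent scikit-image where the adaptive threshold lives in skimage.filters.threshold_local (the file name, block size, and offset are placeholders to tune):

import numpy as np
from skimage import io
from skimage.filters import threshold_local

img = io.imread('image.png', as_gray=True)   # placeholder filename

# each pixel is compared against a threshold computed from its own neighbourhood,
# so a large dark (shadowed) region does not drag down a single global threshold
block_size = 51                              # neighbourhood size, must be odd
local_thresh = threshold_local(img, block_size, offset=0.02)
binary = img > local_thresh

# back to a 0/255 greyscale image
io.imsave('result.png', (binary * 255).astype(np.uint8))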
Well - it is not really a "Python-related question", more of an image processing question.
I think you have 2 options:
1. Use edge detection, and then iterate to check that the area is big enough.
Look at Image outline using python/PIL for example.
2. Use OpenCV. I don't know it well - but this code seems to do something rather close to what you are trying to achieve:
Python: detect black squares

Contours based on a "label mask"

I have images that have had features extracted with a contouring algorithm (I'm doing astrophysical source extraction). This approach yields a "feature map" that has each pixel "labeled" with an integer (usually ~1000 unique features per map).
I would like to show each individual feature as its own contour.
One way I could accomplish this is:
for ii in range(labelmask.max()):
    contour(labelmask, levels=[ii-0.5])
However, this is very slow, particularly for large images. Is there a better (faster) way?
P.S.
A little testing showed that skimage's find-contours is no faster.
As per @tcaswell's comment, I need to explain why contour(labels, levels=np.unique(labels)+0.5) or something similar doesn't work:
1. Matplotlib spaces each subsequent contour "inward" by a linewidth to avoid overlapping contour lines. This is not the behavior desired for a labelmask.
2. The lowest-level contours encompass the highest-level contours
3. As a result of the above, the highest-level contours will be surrounded by a miniature version of whatever colormap you're using and will have extra-thick contours compared to the lowest-level contours.
Sorry for answering my own... impatience (and good luck) got the better of me.
The key is to use matplotlib's low-level C routines:
I = imshow(data)
E = I.get_extent()
x, y = np.meshgrid(np.linspace(E[0], E[1], labels.shape[1]), np.linspace(E[2], E[3], labels.shape[0]))

for ii in np.unique(labels):
    if ii == 0:
        continue
    tracer = matplotlib._cntr.Cntr(x, y, labels*(labels==ii))
    T = tracer.trace(0.5)
    contour_xcoords, contour_ycoords = T[0].T
    # to plot them:
    plot(contour_xcoords, contour_ycoords)
Note that labels*(labels==ii) will put each label's contour at a slightly different location; change it to just labels==ii if you want overlapping contours between adjacent labels.

ValueError: setting an array element with a sequence at fit(X, y) in k-nearest neighbor

I have an error at this line: neigh.fit(X, y):
ValueError: setting an array element with a sequence.
I checked the fit function, and X should be: {array-like, sparse matrix, BallTree, cKDTree}
My X is a list of lists, with the first element a solidity number and the second element a Hu-moments list (7 cells).
If I change it and take only the first Hu-moment number, so that I have a pure list of lists, it gives this error: query data dimension must match BallTree data dimension.
My code:
import os
import glob
import cv2

listafeaturevector = list()
path = 'imgknn/'
for infile in glob.glob(os.path.join(path, '*.jpg')):
    print("current file is: " + infile)
    gray = cv2.imread(infile, 0)
    element = cv2.getStructuringElement(cv2.MORPH_CROSS, (6,6))
    graydilate = cv2.erode(gray, element)
    ret, thresh = cv2.threshold(graydilate, 127, 255, cv2.THRESH_BINARY_INV)
    imgbnbin = thresh

    # CONTOURS
    contours, hierarchy = cv2.findContours(imgbnbin, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    print(len(contours))

    for i in range(0, len(contours)):
        fv = list()   # 1 feature vector

        # HU MOMENTS
        # print("humoments")
        mom = cv2.moments(contours[i], 1)
        Humoments = cv2.HuMoments(mom)
        # print(Humoments)
        fv.append(Humoments)   # query data dimension must match BallTree data dimension

        # SOLIDITY
        area = cv2.contourArea(contours[i])
        hull = cv2.convexHull(contours[i])   # has many values
        hull_area = cv2.contourArea(hull)
        solidity = float(area)/hull_area
        fv.append(solidity)
        # fv.append(elongation)

        listafeaturevector.append(fv)

print("i have done")
print(len(listafeaturevector))
lenmatrice = len(listafeaturevector)

# KNN
X = listafeaturevector
y = [0,1,2,3] * (lenmatrice/4)

from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)   # ValueError: setting an array element with a sequence.
print(neigh.predict([[1.1]]))
print(neigh.predict_proba([[0.9]]))
If I try to convert it to a numpy array:
listafv = np.dstack(listafeaturevector)
listafv=np.rollaxis(listafv,-1)
print(listafv.shape)
data = listafv.reshape((lenmatrice, -1))
print(data.shape)
#KNN
X = data
I get: setting an array element with a sequence
A couple of suggestions/questions:
Humoments = cv2.HuMoments(mom)
What is the type of the returned Humoments value? A float or a list? If it's a float, that is fine.
for each image file:
    for i in range(0, len(contours)):
        fv = list()   # 1 feature vector
        ...
        fv.append(Humoments)
        ...
        fv.append(solidity)
        listafeaturevector.append(fv)
The above code does not seem correct. In your problem, I think you need to construct one feature vector for each image. So anything that is related to image i should go into the same feature vector x_i. Then you combine all the feature vectors to get a list of feature vectors X. However, your listafeaturevector (or X) is appended to in the inner-most loop, which is obviously not correct.
Second, you have a loop over the number of elements in contours; are you sure the number of elements stays the same for each image? Otherwise, the number of features (|x_i|) is totally different across different images, which might cause the error of
setting an array element with a sequence.
Third, are you clear about how you want to classify the images? What are the target values/labels of the different images? I see you are just setting the labels with [0,1,2,3] * (lenmatrice/4). Can you elaborate on what you are trying to do with those images? Do they contain different types of objects? Do they show different patterns? Do the images depict different topics/colors? If yes, then for each different type you give a different label - either 0, 1, 2 or 'red', 'white', 'black' (assuming you have only 3 types). The values of the labels do not matter; what matters is how many distinct values there are. I am trying to understand what the labels mean in your case.
On the other hand, if you only want to retrieve similar images, you don't need to use a classifier or specify a label for each image. Instead, try to use NearestNeighbors.
print(neigh.predict([[1.1]]))
print(neigh.predict_proba([[0.9]]))
Fourth, the above two lines of test are not correct. You need to pass an X-like object in order to get a prediction from the classifier. That is to say, you need a feature vector x with the same structure as the ones you constructed in your training examples (with all h, e, s values in the same order).
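For illustration, a minimal sketch of building flat, fixed-length feature vectors (here 7 Hu moments plus solidity per contour; contours_per_image is a hypothetical grouping of contours by class label) so that fit and predict receive plain 2-D numeric arrays:

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X, y = [], []
for label, contour_list in enumerate(contours_per_image):   # hypothetical: contours grouped per class
    for cnt in contour_list:
        hu = cv2.HuMoments(cv2.moments(cnt, 1)).flatten()    # 7 plain numbers, not a nested (7,1) array
        area = cv2.contourArea(cnt)
        hull_area = cv2.contourArea(cv2.convexHull(cnt))
        solidity = float(area) / hull_area if hull_area > 0 else 0.0
        X.append(np.concatenate([hu, [solidity]]))           # flat vector of 8 floats
        y.append(label)

X = np.array(X)               # shape (n_samples, 8): no ragged elements, so fit() accepts it
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)
print(neigh.predict(X[:1]))   # predictions also need 8-feature rows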