Selecting circular region of interest w/ Python - numpy

I am trying to write a program that can circle/mark out 5 distinct regions of interest in an image with a white background. Essentially these are 5 experimental conditions, and ultimately I would like to analyse their intensities. The example images show 5 circles with varying fluorescence levels, one set in red and another in yellow.
What I want to achieve is something that can circle/mark out the regions, as in the example image where all 5 regions are marked out (I did this manually). I have written some code using cv2, but I haven't been able to obtain desirable results.
import cv2
import numpy as np
experiment = cv2.imread('image.png')
gray = cv2.cvtColor(experiment, cv2.COLOR_BGR2GRAY)
img = cv2.medianBlur(gray, 5)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 120,
                           param1=100, param2=30, minRadius=0, maxRadius=0)
if circles is not None:  # HoughCircles returns None when nothing is found
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(experiment, (int(i[0]), int(i[1])), int(i[2]), (0, 255, 0), 2)
cv2.imshow("Detection results", experiment)
cv2.waitKey(0)
cv2.destroyAllWindows()
My results are wrong. For the yellow image, only one condition is marked while the others aren't. For the red image, only the second and fifth conditions are marked.
How should I change my code to ensure all 5 conditions are marked, and what parameters should I change so that each circle lies strictly within the bounds of the liquid, with no white background incorporated that would affect my fluorescence quantification/analysis?
Additional notes:
1. All the images being analysed will have a white background and 5 distinct liquid drops, so I think HoughCircles can handle this and I don't need any fancy AI to detect the circles.
2. Ultimately I want to have this on a website where users can simply upload their experimental results and have their 5 conditions circled, isolated, and analysed for fluorescence in code, with the entire process automated. That's why I don't want to use, for instance, the ROI manager in ImageJ/Fiji, because that would require users to do everything manually.
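One direction to explore (a sketch, not a verified solution): because the drops are the only non-white regions, a plain inverse threshold plus findContours can locate all five conditions without tuning Hough parameters, and shrinking each fitted circle slightly keeps the ROI inside the liquid. The file name, the 230 threshold, the 500-pixel area cutoff, and the 0.9 shrink factor below are assumptions to tune against real images:
import cv2
import numpy as np
experiment = cv2.imread('image.png')
gray = cv2.cvtColor(experiment, cv2.COLOR_BGR2GRAY)
# drops are darker than the white background, so an inverse
# threshold isolates them
_, mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY_INV)
# OpenCV 4.x: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) < 500:   # skip specks of noise
        continue
    (x, y), r = cv2.minEnclosingCircle(c)
    # shrink the radius slightly so the circle stays inside the drop
    cv2.circle(experiment, (int(x), int(y)), int(r * 0.9), (0, 255, 0), 2)
cv2.imshow("candidate ROIs", experiment)
cv2.waitKey(0)
cv2.destroyAllWindows()
Each kept contour can also be rendered into its own mask (cv2.drawContours on a blank image) so that cv2.mean(experiment, mask=drop_mask), with drop_mask a per-contour mask, yields the per-condition intensity.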

Same line colors on sns.lineplot

I have a task: drawing a plot where x is the year and y is the number of games released that year. I also have information about game platforms, so I want to draw 10 different lines in 10 different colors, and I need to do it in a for loop. If I draw without the hue parameter, the lines have different colors, but if I add it, all lines become blue.
Here is the code (originally posted as a screenshot):
for platform in top10_platofrm_sales_dict:
    sns.lineplot(
        data=pivot_top_10
            .query('platform == @platform and year_of_release != 0'),
        x='year_of_release',
        y='name',
        hue='platform'
    )
plt.gcf().set_size_inches(16, 8)
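For what it's worth, here is a likely explanation plus a sketch of a fix (assuming pivot_top_10 has the 'platform', 'year_of_release', and 'name' columns used above). Each sns.lineplot call inside the loop receives data for only one platform, so hue has a single level and seaborn assigns it the first palette color, blue, on every iteration. Calling lineplot once on the unfiltered data lets hue assign one distinct color per platform:
import seaborn as sns
import matplotlib.pyplot as plt
# One call instead of a loop: seaborn now sees all platforms at once
# and gives each hue level its own palette color.
sns.lineplot(
    data=pivot_top_10.query('year_of_release != 0'),
    x='year_of_release',
    y='name',
    hue='platform'
)
plt.gcf().set_size_inches(16, 8)
plt.show()
If the loop must stay, an alternative is to drop hue and pass an explicit label=platform to each call, letting matplotlib's color cycle assign distinct colors, which matches the behavior you observed without hue.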

What are the criteria for the class weights in DeepLab on my custom dataset?

I'm training DeepLab v3 on a custom dataset with three classes, including the background.
My classes are background, panda, and bottle, and there are 1949 pictures.
I'm using a MobileNetV2 model, and segmentation_dataset.py has been modified as follows:
_MYDATA_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 975,      # num of samples in images/training
        'trainval': 1949,
        'val': 974,        # num of samples in images/validation
    },
    num_classes=3,
    ignore_label=0,
)
train.py has been modified as follows:
flags.DEFINE_boolean('initialize_last_layer', False,
                     'Initialize the last layer.')
flags.DEFINE_boolean('last_layers_contain_logits_only', True,
                     'Only consider logits as last layers or not.')
train_utils.py has not been modified.
not_ignore_mask = tf.to_float(tf.not_equal(scaled_labels, ignore_label)) * loss_weight
I get some results, but not perfect ones. For example, the mask colors of the panda and the bottle are the same or not distinct, whereas the result I want is the panda in red and the bottle in green. So I judged that there was a problem with the weights.
Based on other people's questions, train_utils.py was configured as follows:
ignore_weight = 0
label0_weight = 1
label1_weight = 10
label2_weight = 15
not_ignore_mask = (
    tf.to_float(tf.equal(scaled_labels, 0)) * label0_weight +
    tf.to_float(tf.equal(scaled_labels, 1)) * label1_weight +
    tf.to_float(tf.equal(scaled_labels, 2)) * label2_weight +
    tf.to_float(tf.equal(scaled_labels, ignore_label)) * ignore_weight)
tf.losses.softmax_cross_entropy(
    one_hot_labels,
    tf.reshape(logits, shape=[-1, num_classes]),
    weights=not_ignore_mask,
    scope=loss_scope)
I have a question here: what are the criteria for the weights?
My dataset is generated automatically, so I don't know exactly which class occurs more often, but the amounts are similar.
And another thing: I'm using the Pascal colormap, where the first color is black (background), the second red, and the third green. I want pandas to be designated exactly as red and bottles as green. What should I do?
I think you might have mixed up your label definitions; maybe I can help you with that.
Please check your segmentation_dataset.py again. There you define "0" as the ignored label, which means that all pixels labeled "0" are excluded from the training process (more specifically, excluded from the calculation of the loss function, and so they have no influence on the updating of the weights). Given that, it is crucial not to "ignore" the background class, as it is also a class you want to predict correctly. In train_utils.py you assign a weighting factor to the ignored class, which has no effect. Make sure that you don't mix up your three training classes [background, panda, bottle] with the "ignored" tag.
In your case num_classes=3 should be correct, as it specifies the number of labels to predict (the model automatically assumes these labels are 0, 1, and 2). If you want to ignore certain pixels, annotate them with a fourth label class (just choose a number > 2 for that) and then assign that number to ignore_label. If you don't have any pixels to ignore, still set ignore_label=255 and it will not influence your training ;)
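To make that concrete, here is a minimal sketch of how the two files might look after the fix (a sketch, not the verbatim DeepLab source): the split sizes and the 1/10/15 weights are taken from the question, and ignore_label=255 follows the advice above.
_MYDATA_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 975,
        'trainval': 1949,
        'val': 974,
    },
    num_classes=3,      # background, panda, bottle - all three are trained
    ignore_label=255,   # no pixel carries this label, so nothing is ignored
)

# train_utils.py: weight the three real classes; pixels labeled 255
# match none of the three terms, so they get weight 0 and are
# effectively excluded from the loss
not_ignore_mask = (
    tf.to_float(tf.equal(scaled_labels, 0)) * 1 +    # background
    tf.to_float(tf.equal(scaled_labels, 1)) * 10 +   # panda
    tf.to_float(tf.equal(scaled_labels, 2)) * 15)    # bottle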

How can I do template matching in opencv with colour?

I have been trying to use OpenCV's template matching function to match templates within images. However, when the images are dark brown and dark green, the template matching does not work so well. I am pretty sure the grayscale conversion is responsible for this, because in grayscale those colors look very similar.
From what I see, cv2.matchTemplate() only takes grayscale images. How can I do colored template matching? Should I separate the RGB image into three images, one red, one green, one blue, treat each as a grayscale image, apply matchTemplate, and then sum the similarity rating for each pixel position? Is that the way to do it? Or is there a different function or parameter value I can use to make matchTemplate work for colored images?
You may try this code:
import numpy as np
import cv2
threshold = 0.8
# Read the main and the needle image
# (note: cv2.imread returns channels in B, G, R order)
imageMain = cv2.imread('main/Image/Path/main.png')
imageNeedle = cv2.imread('needle/Image/Path/needle.png')
# Split both into their B, G, R channels
imageMainB, imageMainG, imageMainR = cv2.split(imageMain)
imageNeedleB, imageNeedleG, imageNeedleR = cv2.split(imageNeedle)
# Match each channel separately. TM_CCOEFF_NORMED scores in [-1, 1]
# with higher meaning a better match; with TM_SQDIFF, lower would be
# better and the threshold test below would have to be inverted.
resultB = cv2.matchTemplate(imageMainB, imageNeedleB, cv2.TM_CCOEFF_NORMED)
resultG = cv2.matchTemplate(imageMainG, imageNeedleG, cv2.TM_CCOEFF_NORMED)
resultR = cv2.matchTemplate(imageMainR, imageNeedleR, cv2.TM_CCOEFF_NORMED)
# Add them together to get the total score
result = resultB + resultG + resultR
loc = np.where(result >= 3 * threshold)
print("loc:", loc)
The images I tested with were main.png (the scene), needle.png (the template), and result.png (the output).
Remark: this code may not work on some photos; you may need to modify it further to enhance it.
Note: the test image comes from pexels.com, which provides copyright-free images. If you have any issue with the image copyright and want the image taken down, you are welcome to contact me. Thanks.
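As a side note (not part of the answer above): cv2.matchTemplate also accepts a 3-channel image plus a 3-channel template directly, and it sums the match statistic over all channels, so the manual per-channel split may be unnecessary. A minimal sketch, with placeholder file names:
import cv2
import numpy as np
main_bgr = cv2.imread('main.png')      # color scene (BGR)
needle_bgr = cv2.imread('needle.png')  # color template (BGR)
# For multi-channel inputs, matchTemplate sums over the channels
result = cv2.matchTemplate(main_bgr, needle_bgr, cv2.TM_CCOEFF_NORMED)
loc = np.where(result >= 0.8)
print("match positions (x, y):", list(zip(*loc[::-1])))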

OpenCV detect blobs on the image

I need to find (and draw a rect around) / get the max and min radius blobs on the image (samples below).
The problem is to find the correct filters for the image that will allow a Canny or threshold transformation to highlight the blobs; then I am going to use findContours to find the rectangles.
I've tried:
Threshold - with different levels
blur -> erode -> erode -> grayscale -> canny
changing the image tone with a variety of "lines"
etc. The best result was detecting a piece (20-30%) of a blob, and this info did not allow drawing a rect around the blob. Also, thanks to shadows, dots not related to the blob were detected, which also prevented detecting the area.
As I understand it, I need to find a contour that has hard contrast (not smooth like a shadow). Is there any way to do that with OpenCV?
Update
Here are the cases separately: image 1, image 2, image 3, image 4, image 5, image 6, image 7, image 8, image 9, image 10, image 11, image 12.
One more update
I believe that the blobs have a contrast area at the edge. So I've tried to make the edges stronger: I created two grayscale Mats, A and B, and applied a Gaussian blur to the second one, B (to reduce noise a bit). Then I did some calculations: go around every pixel and find the maximum difference between pixel (Xi, Yi) of A and the nearby pixels of B, then apply that maximum difference to (Xi, Yi). So I get something like this:
Am I on the right track? BTW, can I achieve something like this via OpenCV methods?
Update: image denoising helps to reduce the noise, Sobel highlights the contours, then threshold + findContours and a custom convexHull get something similar to what I'm looking for, but it is not good for some blobs.
Since there are big differences between the input images, the algorithm should be able to adapt to the situation. Since Canny is based on detecting high frequencies, my algorithm treats the sharpness of the image as the parameter used for preprocessing adaptation. I didn't want to spend a week figuring out the functions for all the data, so I applied a simple linear function based on 2 images and then tested it with a third one. Here are my results:
Keep in mind that this is a very basic approach and is only proving a point. It will need experiments, tests, and refining. The idea is to use Sobel and sum over all the pixels acquired. That, divided by the size of the image, should give you a basic estimation of the high-frequency response of the image. Now, experimentally, I found values of clipLimit for the CLAHE filter that work in 2 test cases, and I found a linear function connecting the high-frequency response of the input with a CLAHE clip limit that yields good results.
sobel = get_sobel(img)
clip_limit = (-2.556) * np.sum(sobel)/(img.shape[0] * img.shape[1]) + 26.557
That's the adaptive part. Now for the contours. It took me a while to figure out a correct way of filtering out the noise. I settled on a simple trick: using contour finding twice. First I use it to filter out the unnecessary, noisy contours. Then I continue with some morphological magic to end up with correct blobs for the objects being detected (more details in the code). The final step is to filter the bounding rectangles based on the calculated mean, since, in all of the samples, the blobs are of relatively similar size.
import cv2
import numpy as np

def unsharp_mask(img, blur_size=(5, 5), imgWeight=1.5, gaussianWeight=-0.5):
    gaussian = cv2.GaussianBlur(img, blur_size, 0)
    return cv2.addWeighted(img, imgWeight, gaussian, gaussianWeight, 0)

def smoother_edges(img, first_blur_size, second_blur_size=(5, 5), imgWeight=1.5, gaussianWeight=-0.5):
    img = cv2.GaussianBlur(img, first_blur_size, 0)
    return unsharp_mask(img, second_blur_size, imgWeight, gaussianWeight)

def close_image(img, size=(5, 5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

def open_image(img, size=(5, 5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)

def shrink_rect(rect, scale=0.8):
    center, (width, height), angle = rect
    return center, (width * scale, height * scale), angle

def clahe(img, clip_limit=2.0):
    clahe_filter = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(5, 5))
    return clahe_filter.apply(img)

def get_sobel(img, ksize=3):
    # second derivative in x; ksize passed as a keyword so it is not
    # mistaken for the dst argument
    sobelx64f = cv2.Sobel(img, cv2.CV_64F, 2, 0, ksize=ksize)
    abs_sobel64f = np.absolute(sobelx64f)
    return np.uint8(abs_sobel64f)

img = cv2.imread("blobs4.jpg")
# save a color copy for visualizing
imgc = img.copy()
# resize the image to make the analysis easier (a form of filtering)
resize_times = 5
img = cv2.resize(img, None, fx=1 / resize_times, fy=1 / resize_times)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# use the Sobel operator to evaluate high frequencies
sobel = get_sobel(img)
# experimentally calculated function - needs refining
clip_limit = (-2.556) * np.sum(sobel) / (img.shape[0] * img.shape[1]) + 26.557
# don't apply CLAHE if there is enough high freq to find blobs
if clip_limit < 1.0:
    clip_limit = 0.1
# limit CLAHE if there's not enough detail - needs more tests
if clip_limit > 8.0:
    clip_limit = 8
# apply CLAHE and an unsharp mask to improve high frequencies as much as possible
img = clahe(img, clip_limit)
img = unsharp_mask(img)
# filter the image to ensure edge continuity and perform Canny
# (values selected experimentally, using trackbars)
img_blurred = cv2.GaussianBlur(img.copy(), (2 * 2 + 1, 2 * 2 + 1), 0)
canny = cv2.Canny(img_blurred, 35, 95)
# find the first set of contours (OpenCV 3.x return signature)
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# prepare a (nearly) black canvas to draw contours on
canvas = np.ones(img.shape, np.uint8)
for c in cnts:
    l = cv2.arcLength(c, False)
    x, y, w, h = cv2.boundingRect(c)
    aspect_ratio = float(w) / h
    # filter "bad" contours (values selected experimentally)
    if l > 500:
        continue
    if l < 20:
        continue
    if aspect_ratio < 0.2:
        continue
    if aspect_ratio > 5:
        continue
    if l > 150 and (aspect_ratio > 10 or aspect_ratio < 0.1):
        continue
    # draw all the other contours
    cv2.drawContours(canvas, [c], -1, (255, 255, 255), 2)
# perform closing and blurring to close the gaps
canvas = close_image(canvas, (7, 7))
img_blurred = cv2.GaussianBlur(canvas, (8 * 2 + 1, 8 * 2 + 1), 0)
# smooth the edges a bit to make sure Canny will find continuous edges
img_blurred = smoother_edges(img_blurred, (9, 9))
kernel = np.ones((3, 3), np.uint8)
# erode to make sure separate blobs are not touching each other
eroded = cv2.erode(img_blurred, kernel)
# perform the necessary thresholding before Canny
_, im_th = cv2.threshold(eroded, 50, 255, cv2.THRESH_BINARY)
canny = cv2.Canny(im_th, 11, 33)
# find contours again - this time mostly the right ones
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# calculate the mean area of the contours' bounding rectangles
sum_area = 0
rect_list = []
for i, c in enumerate(cnts):
    rect = cv2.minAreaRect(c)
    _, (width, height), _ = rect
    area = width * height
    sum_area += area
    rect_list.append(rect)
mean_area = sum_area / len(cnts)
# choose only rectangles that fulfill the requirement:
# area > mean_area * 0.6
for rect in rect_list:
    _, (width, height), _ = rect
    area = width * height
    if area > mean_area * 0.6:
        # shrink the rectangles, since the shadows and reflections
        # make the resulting rectangle a bit bigger
        # the value was guessed - might need refining
        rect = shrink_rect(rect, 0.8)
        box = cv2.boxPoints(rect)
        box = np.int0(box * resize_times)
        cv2.drawContours(imgc, [box], 0, (0, 255, 0), 1)
# resize for visualizing purposes
imgc = cv2.resize(imgc, None, fx=0.5, fy=0.5)
cv2.imshow("imgc", imgc)
cv2.imwrite("result3.png", imgc)
cv2.waitKey(0)
Overall I think that's a very interesting problem, a little bit too big to be answered here. The approach I presented is to be treated as a road sign, not a complete solution. The basic idea being:
Adaptive preprocessing.
Finding contours twice: for filtering and then for the actual classification.
Filtering the blobs based on their mean size.
Thanks for the fun and good luck!
Here is the code I used:
import cv2
import numpy as np

x1 = 'C:\\Users\\Desktop\\python\\stack_over_flow\\XsXs9.png'
image = cv2.imread(x1, 0)
image1 = cv2.imread(x1, 1)
x, y = image.shape

# difference of two Gaussian blurs (DoG) to emphasize blob edges
median = cv2.GaussianBlur(image, (9, 9), 0)
median1 = cv2.GaussianBlur(image, (21, 21), 0)
a = median1 - median
c = 255 - a
ret, thresh1 = cv2.threshold(c, 12, 255, cv2.THRESH_BINARY)

kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(thresh1, kernel, iterations=1)
opening = cv2.morphologyEx(dilation, cv2.MORPH_OPEN, kernel)
cv2.imwrite('D:\\test12345.jpg', opening)

# OpenCV 3.x return signature
ret, contours, hierarchy = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Blank_window = np.uint8(np.zeros([x, y, 3]))  # spare canvas (unused below)

for u in range(len(contours)):
    if np.size(contours[u]) > 200:
        ellipse = cv2.fitEllipse(contours[u])
        (center, axes, orientation) = ellipse
        majoraxis_length = max(axes)
        minoraxis_length = min(axes)
        eccentricity = np.sqrt(1 - (minoraxis_length / majoraxis_length) ** 2)
        # keep only near-circular contours
        if eccentricity < 0.8:
            cv2.drawContours(image1, contours, u, (255, 1, 255), 3)
cv2.imwrite('D:\\marked.jpg', image1)
Here the problem is to find a near-circular object. This simple solution is based on finding the eccentricity of each and every contour; the objects detected this way are the drops of water.
I have a partial solution in place.
FIRST
I initially converted the image to the HSV color space and tinkered with the value channel. In doing so I came across something unique: in almost every image, the droplets have a tiny reflection of light, and this is highlighted distinctly in the value channel.
Upon inverting this channel I was able to obtain the following (results for Samples 1-3; images omitted).
SECOND
Now we have to extract the locations of those points. To do so I performed anomaly detection on the inverted value channel, where by anomaly I mean the black dots present in it.
In order to do this I calculated the median of the inverted value channel and treated pixel values within 70% above and below the median as normal pixels, while every pixel value lying beyond this range was treated as an anomaly. The black dots fit perfectly into that range (results for Samples 1-3; images omitted). It did not turn out well for a few images.
As you can see, the black dot is due to the reflection of light, which is unique to the droplets of water. Other circular edges might be present in the image, but the reflection distinguishes the droplet from those edges.
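A minimal sketch of the reflection-detection step described above, with a hypothetical input file; the 70% band around the median is the figure given in this answer:
import cv2
import numpy as np
img = cv2.imread('droplets.png')              # hypothetical input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
inverted = 255 - hsv[:, :, 2]                 # inverted value channel
med = np.median(inverted)
lower, upper = 0.3 * med, 1.7 * med           # 70% below / above the median
# pixels outside the band are anomalies (candidate reflections)
anomalies = (inverted < lower) | (inverted > upper)
ys, xs = np.nonzero(anomalies)
print("candidate reflection pixels:", list(zip(xs, ys))[:10])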
THIRD
Now, since we have the locations of these black dots, we can perform a Difference of Gaussians (DoG) (also mentioned in the update of the question) to obtain relevant edge information. If an obtained black-dot location lies within the edges discovered, it is taken to be a water droplet.
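A sketch of what that DoG check could look like; the kernel sizes, the edge threshold, and the helper dot_in_droplet are illustrative guesses, not code from this answer:
import cv2
import numpy as np
gray = cv2.imread('droplets.png', 0)          # hypothetical input
# Difference of Gaussians: subtracting a heavily blurred copy from a
# lightly blurred one keeps mid-frequency edge information
dog = (cv2.GaussianBlur(gray, (9, 9), 0).astype(np.int16)
       - cv2.GaussianBlur(gray, (21, 21), 0).astype(np.int16))
edges = (np.abs(dog) > 10).astype(np.uint8) * 255
# OpenCV 4.x: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
def dot_in_droplet(x, y):
    # a reflection dot counts as a droplet if it lies inside an edge contour
    return any(cv2.pointPolygonTest(c, (float(x), float(y)), False) >= 0
               for c in contours)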
Disclaimer: this method does not work for all images. You are welcome to add your suggestions.
Good day. I am working on this subject, and my advice to you is: first, apply denoising filters such as a Gaussian filter, and process the image after that.
You can detect these circles with blob detection rather than with contours.
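That advice presumably refers to OpenCV's built-in SimpleBlobDetector. A minimal sketch, with the file name and parameter values as guesses to tune:
import cv2
gray = cv2.imread('blobs.jpg', 0)         # hypothetical input image
gray = cv2.GaussianBlur(gray, (5, 5), 0)  # denoise first, as advised
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 100                      # tune to the expected blob size
params.filterByCircularity = True
params.minCircularity = 0.5               # keep roughly round blobs
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(gray)
for kp in keypoints:
    print("center:", kp.pt, "diameter:", kp.size)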

Comparing two images - Detect egg in a nest

I have a webcam directly over a chicken nest. This camera takes images and uploads them to a folder on a server. I'd like to detect if an egg has been laid from this image.
I'm thinking the best method would be to compare the contrast, as the egg will be much more reflective than the straw nest. (The camera has infrared, so the image is partly grayscale.)
I'd like to do this in .NET if possible.
Try resizing your image to a smaller size, maybe 10 x 10 pixels. This averages out any small disturbing details.
' Requires Imports System.Drawing.Drawing2D for the mode enums
Const N As Integer = 10
Dim newImage As New Bitmap(N, N)
Dim fromCamera As Image = Nothing ' Get image from camera here
Using gr As Graphics = Graphics.FromImage(newImage)
    gr.SmoothingMode = SmoothingMode.HighSpeed
    gr.InterpolationMode = InterpolationMode.Bilinear
    gr.PixelOffsetMode = PixelOffsetMode.HighSpeed
    gr.DrawImage(fromCamera, New Rectangle(0, 0, N, N))
End Using
Note: you do not need high quality, but you do need good averaging. You may have to test different quality settings.
Since a pixel now covers a large area of your original image, a bright pixel is very likely part of an egg. It might also be a good idea to compare the brightness of the brightest pixel to the average image brightness, since that would reduce problems due to global illumination changes.
EDIT (in response to comment):
Your code is well structured and makes sense. Here are some thoughts:
Calculate the gray value from the color values with:
Dim grayValue = c.R * 0.3 + c.G * 0.59 + c.B * 0.11
... instead of comparing the three color components separately. The different weights reflect the fact that we perceive green more strongly than red, and red more strongly than blue. Again, we do not want a beautiful thumbnail; we want good contrast. Therefore, you might want to do some experiments here as well. It may be sufficient to use only the red component; depending on lighting conditions, one color component might yield better contrast than the others.
I would recommend making the gray conversion part of the thumbnail creation and writing the thumbnails to a file or to the screen. This would let you play with the different settings (thumbnail size, resizing parameters, color-to-gray conversion, etc.) and compare the (intermediate) results visually. Creating a bitmap (bmp) with the end result is a very good idea.
The Using statement does the Dispose() for you, even if an exception occurs before End Using (there is a hidden Try Finally involved).
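For readers outside .NET, here is the same downscale-and-compare idea as a short Python/OpenCV sketch; the file name and the 1.5x brightness ratio are illustrative assumptions, not values from the answer:
import cv2
frame = cv2.imread('nest.jpg')                 # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# averaging interpolation plays the role of the 10 x 10 thumbnail
small = cv2.resize(gray, (10, 10), interpolation=cv2.INTER_AREA)
brightest = int(small.max())
average = float(small.mean())
# a reflective egg shows up as a pixel much brighter than the straw;
# comparing against the average tolerates global illumination changes
if brightest > 1.5 * average:
    print(f"egg detected (brightest {brightest} vs average {average:.1f})")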