Add ROI to 1D image (intensity plot etc.) - dm-script

I am trying to add an ROI at a determined position to a 1D image (an intensity plot or line-scan profile). It does not work, and I could not find the right function via Google. Is there correct code for this?
number d0, d1
image spc := GetFrontImage()
spc.GetSize(d0, d1)
imagedisplay imgdisp = spc.ImageGetImageDisplay(0)
ROI roi_1 = NewROI()
ROISetRectangle(roi_1, 0, 100, d1, 200)
imgdisp.ImageDisplayAddROI(roi_1)
ImageDisplaySetROISelected(imgdisp, roi_1, 1)

The exact error message when running your code tells you the problem very accurately: not all types of ROIs (rectangle, line, lasso, point, ...) are suitable for all kinds of displays. In particular, you can't draw a rectangle ROI on a LinePlot display.
In fact, LinePlot displays support a single type of ROI: Range-ROIs.
So, the code for your problem is:
image spc := GetFrontImage()
imagedisplay imgdisp = spc.ImageGetImageDisplay(0)
ROI roi_1 = NewROI()
ROISetRange(roi_1, 100, 200)
imgdisp.ImageDisplayAddROI(roi_1)
ImageDisplaySetROISelected(imgdisp, roi_1, 1)

Related

How to fill a line in 2D image along a given radius with the data in a given line image?

I want to fill a 2D image along its polar radius; the data are stored in an image where each row or column corresponds to the radius in the target image. How can I fill the target image efficiently, for example with iradius or similar functions? I would prefer to avoid a pixel-by-pixel operation.
Are you looking for something like this?
number maxR = 100
image rValues := realimage("I(r)",4,maxR)
rValues = 10 + trunc(100*random())
image plot :=realimage("Ring",4,2*maxR,2*maxR)
rValues.ShowImage()
plot.ShowImage()
plot = rValues.warp(iradius,0)
You might also want to check out the relevant example code in the F1 help documentation of GMS itself.
Explaining warp a bit:
plot = rValues.warp(iradius,0)
Assigns values to plot based on a value-lookup in rValues.
For each pixel in plot a coordinate position in rValues is computed, and the value is simply looked up. If the computed coordinate is non-integer, bilinear interpolation between the 4 closest points is used.
In the example, the two 'formulas' for the coordinate calculation are simply x' = iradius and y' = 0, where iradius is an expression evaluated from the coordinate in plot, for convenience.
You can feed any expression into the parameters of warp(), and the command is closely related to addressing values with the square-bracket notation. In fact, the only difference is that warp performs bilinear interpolation of the values instead of truncating the coordinates to integer values.
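For readers more comfortable with NumPy, here is a rough analogue of what the warp example computes (my own sketch, not GMS code; it uses nearest-neighbour lookup where warp would interpolate bilinearly):
import numpy as np

# Fill a 2*max_r x 2*max_r image by looking up a 1D radial profile,
# mimicking plot = rValues.warp(iradius, 0) from the DM-script above.
max_r = 100
r_values = 10 + np.trunc(100 * np.random.rand(max_r))   # the I(r) profile
yy, xx = np.mgrid[0:2 * max_r, 0:2 * max_r]
radius = np.hypot(xx - max_r, yy - max_r)                # iradius equivalent
idx = np.clip(np.round(radius).astype(int), 0, max_r - 1)
ring = r_values[idx]                                     # the "Ring" image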

How to Zero Pad RGB Image?

I want to pad an RGB image of size 500x500x3 to 512x512x3. I understand that I need to add 6 pixels on each border, but I cannot figure out how. I have read the numpy.pad function docs but couldn't understand how to use it. Code snippets would be appreciated.
If you need to pad with zeros:
RGB = np.pad(RGB, pad_width=[(6, 6),(6, 6),(0, 0)], mode='constant')
Use the constant_values argument to pad with other values (the default is 0), one pair per axis:
RGB = np.pad(RGB, pad_width=[(6, 6), (6, 6), (0, 0)], mode='constant', constant_values=[(3, 3), (5, 5), (0, 0)])
We could build a solution with border padding, but it would get a bit complex. I would like to suggest an alternate approach. First we create a canvas of size 512x512, and then we place your original image inside this canvas. You can get help from the following code:
import numpy as np
# Create a larger black canvas (note that np.zeros takes the shape as a tuple)
canvas = np.zeros((512, 512, 3), dtype=np.uint8)
canvas[6:506, 6:506] = your_500_500_img
Obviously you can generalize 6 and 506 into a padding variable (padding, 512 - padding, etc.), but this code illustrates the concept.
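A hedged sketch of that generalization, with your_500_500_img standing in for the questioner's image:
import numpy as np

# Derive the offsets from the two sizes instead of hard-coding 6 and 506.
target = 512
h, w = your_500_500_img.shape[:2]
pad_y, pad_x = (target - h) // 2, (target - w) // 2
canvas = np.zeros((target, target, 3), dtype=your_500_500_img.dtype)
canvas[pad_y:pad_y + h, pad_x:pad_x + w] = your_500_500_img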

OpenCV detect blobs on the image

I need to find (and draw a rect around) / get the max- and min-radius blobs in the image (samples below).
The problem is to find the correct filters for the image that will allow a Canny or threshold transformation to highlight the blobs; then I am going to use findContours to find the rectangles.
I've tried:
Threshold - with different levels
blur->erode->erode->grayscale->canny
changing the image tone with a variety of "lines"
etc. The best result was detecting a piece (20-30%) of a blob, and that is not enough to draw a rect around the blob. Also, thanks to the shadows, dots not related to the blobs were detected, which also prevents detecting the area.
As I understand it, I need to find a contour that has hard contrast (not smooth like a shadow). Is there any way to do that with OpenCV?
Update
cases separately: image 1, image 2, image 3, image 4, image 5, image 6, image 7, image 8, image 9, image 10, image 11, image 12
One more Update
I believe the blobs have a contrast area at the edge, so I've tried to make the edges stronger. I created two grayscale Mats, A and B, and applied a Gaussian blur to the second one, B (to reduce noise a bit). Then I did some calculations: go around every pixel and find the max difference between pixel (Xi, Yi) of A and the nearby pixels of B,
and write that max difference back to (Xi, Yi). The result looks like this:
Am I on the right track? By the way, can I achieve something like this via OpenCV methods?
Update: Image denoising helps to reduce the noise, and Sobel highlights the contours; threshold + findContours and a custom convexHull then produce something close to what I'm looking for, but it is not good for some blobs.
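As an aside, one vectorized way to compute such a "max difference to the neighbours" map in OpenCV (a sketch of my reading of the update above, not the asker's actual code; the file name and kernel sizes are assumptions):
import cv2
import numpy as np

A = cv2.imread("blob.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
B = cv2.GaussianBlur(A, (5, 5), 0)                 # the blurred copy
kernel = np.ones((3, 3), np.uint8)
# max |A - b| over b in a 3x3 neighbourhood of B equals the larger of the
# distances to the neighbourhood maximum and minimum of B
b_max = cv2.dilate(B, kernel).astype(np.int16)
b_min = cv2.erode(B, kernel).astype(np.int16)
A16 = A.astype(np.int16)
diff = np.maximum(np.abs(A16 - b_max), np.abs(A16 - b_min))
result = cv2.convertScaleAbs(diff)                 # back to uint8 for display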
Since there are big differences between the input images, the algorithm should be able to adapt to the situation. Since Canny is based on detecting high frequencies, my algorithm treats the sharpness of the image as the parameter used for preprocessing adaptation. I didn't want to spend a week figuring out the functions for all the data, so I applied a simple, linear function based on 2 images and then tested with a third one. Here are my results:
Bear in mind that this is a very basic approach and only proves a point. It will need experiments, tests, and refining. The idea is to use Sobel and sum over all the pixels acquired. That, divided by the size of the image, should give you a basic estimation of the high-frequency response of the image. Now, experimentally, I found values of clipLimit for the CLAHE filter that work in 2 test cases, and found a linear function connecting the high-frequency response of the input with the CLAHE clip limit, yielding good results.
sobel = get_sobel(img)
clip_limit = (-2.556) * np.sum(sobel)/(img.shape[0] * img.shape[1]) + 26.557
That's the adaptive part. Now for the contours. It took me a while to figure out a correct way of filtering out the noise. I settled for a simple trick: using contours finding twice. First I use it to filter out the unnecessary, noisy contours. Then I continue with some morphological magic to end up with correct blobs for the objects being detected (more details in the code). The final step is to filter bounding rectangles based on the calculated mean, since, on all of the samples, the blobs are of relatively similar size.
import cv2
import numpy as np

def unsharp_mask(img, blur_size=(5, 5), imgWeight=1.5, gaussianWeight=-0.5):
    gaussian = cv2.GaussianBlur(img, (5, 5), 0)
    return cv2.addWeighted(img, imgWeight, gaussian, gaussianWeight, 0)

def smoother_edges(img, first_blur_size, second_blur_size=(5, 5), imgWeight=1.5, gaussianWeight=-0.5):
    img = cv2.GaussianBlur(img, first_blur_size, 0)
    return unsharp_mask(img, second_blur_size, imgWeight, gaussianWeight)

def close_image(img, size=(5, 5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

def open_image(img, size=(5, 5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)

def shrink_rect(rect, scale=0.8):
    center, (width, height), angle = rect
    width = width * scale
    height = height * scale
    rect = center, (width, height), angle
    return rect

def clahe(img, clip_limit=2.0):
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(5, 5))
    return clahe.apply(img)

def get_sobel(img, size=-1):
    sobelx64f = cv2.Sobel(img, cv2.CV_64F, 2, 0, size)
    abs_sobel64f = np.absolute(sobelx64f)
    return np.uint8(abs_sobel64f)

img = cv2.imread("blobs4.jpg")
# save color copy for visualizing
imgc = img.copy()
# resize image to make the analytics easier (a form of filtering)
resize_times = 5
img = cv2.resize(img, None, fx=1.0 / resize_times, fy=1.0 / resize_times)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# use the sobel operator to evaluate high frequencies
sobel = get_sobel(img)
# experimentally calculated function - needs refining
clip_limit = (-2.556) * np.sum(sobel) / (img.shape[0] * img.shape[1]) + 26.557
# don't apply clahe if there is enough high freq to find blobs
if clip_limit < 1.0:
    clip_limit = 0.1
# limit clahe if there's not enough detail - needs more tests
if clip_limit > 8.0:
    clip_limit = 8
# apply clahe and unsharp mask to improve high frequencies as much as possible
img = clahe(img, clip_limit)
img = unsharp_mask(img)
# filter the image to ensure edge continuity and perform Canny
# (values selected experimentally, using trackbars)
img_blurred = cv2.GaussianBlur(img.copy(), (2 * 2 + 1, 2 * 2 + 1), 0)
canny = cv2.Canny(img_blurred, 35, 95)
# find the first set of contours
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# prepare a black image to draw contours on
canvas = np.ones(img.shape, np.uint8)
for c in cnts:
    l = cv2.arcLength(c, False)
    x, y, w, h = cv2.boundingRect(c)
    aspect_ratio = float(w) / h
    # filter "bad" contours (values selected experimentally)
    if l > 500:
        continue
    if l < 20:
        continue
    if aspect_ratio < 0.2:
        continue
    if aspect_ratio > 5:
        continue
    if l > 150 and (aspect_ratio > 10 or aspect_ratio < 0.1):
        continue
    # draw all the other contours
    cv2.drawContours(canvas, [c], -1, (255, 255, 255), 2)
# perform closing and blurring to close the gaps
canvas = close_image(canvas, (7, 7))
img_blurred = cv2.GaussianBlur(canvas, (8 * 2 + 1, 8 * 2 + 1), 0)
# smooth the edges a bit to make sure canny will find continuous edges
img_blurred = smoother_edges(img_blurred, (9, 9))
kernel = np.ones((3, 3), np.uint8)
# erode to make sure separate blobs are not touching each other
eroded = cv2.erode(img_blurred, kernel)
# perform the necessary thresholding before Canny
_, im_th = cv2.threshold(eroded, 50, 255, cv2.THRESH_BINARY)
canny = cv2.Canny(im_th, 11, 33)
# find contours again - this time mostly the right ones
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# calculate the mean area of the contours' bounding rectangles
sum_area = 0
rect_list = []
for c in cnts:
    rect = cv2.minAreaRect(c)
    _, (width, height), _ = rect
    area = width * height
    sum_area += area
    rect_list.append(rect)
mean_area = sum_area / len(cnts)
# choose only rectangles that fulfill the requirement:
# area > mean_area * 0.6
for rect in rect_list:
    _, (width, height), _ = rect
    area = width * height
    if area > mean_area * 0.6:
        # shrink the rectangles, since the shadows and reflections
        # make the resulting rectangle a bit bigger
        # the value was guessed - might need refining
        rect = shrink_rect(rect, 0.8)
        box = cv2.boxPoints(rect)
        box = np.int0(box * resize_times)
        cv2.drawContours(imgc, [box], 0, (0, 255, 0), 1)
# resize for visualizing purposes
imgc = cv2.resize(imgc, None, fx=0.5, fy=0.5)
cv2.imshow("imgc", imgc)
cv2.imwrite("result3.png", imgc)
cv2.waitKey(0)
Overall I think this is a very interesting problem, a little bit too big to be answered here. The approach I presented should be treated as a road sign, not a complete solution. The basic idea is:
Adaptive preprocessing.
Finding contours twice: for filtering and then for the actual classification.
Filtering the blobs based on their mean size.
Thanks for the fun and good luck!
Here is the code I used:
import cv2
import numpy as np

x1 = 'C:\\Users\\Desktop\\python\\stack_over_flow\\XsXs9.png'
image = cv2.imread(x1, 0)
image1 = cv2.imread(x1, 1)
x, y = image.shape
# difference of two Gaussian blurs to enhance the droplet edges
median = cv2.GaussianBlur(image, (9, 9), 0)
median1 = cv2.GaussianBlur(image, (21, 21), 0)
a = median1 - median
c = 255 - a
ret, thresh1 = cv2.threshold(c, 12, 255, cv2.THRESH_BINARY)
kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(thresh1, kernel, iterations=1)
opening = cv2.morphologyEx(dilation, cv2.MORPH_OPEN, kernel)
cv2.imwrite('D:\\test12345.jpg', opening)
ret, contours, hierarchy = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = np.size(contours[:])
Blank_window = np.zeros([x, y, 3])
Blank_window = np.uint8(Blank_window)
for u in range(0, c - 1):
    if np.size(contours[u]) > 200:
        ellipse = cv2.fitEllipse(contours[u])
        (center, axes, orientation) = ellipse
        majoraxis_length = max(axes)
        minoraxis_length = min(axes)
        eccentricity = np.sqrt(1 - (minoraxis_length / majoraxis_length) ** 2)
        # keep only near-circular contours
        if eccentricity < 0.8:
            cv2.drawContours(image1, contours, u, (255, 1, 255), 3)
cv2.imwrite('D:\\marked.jpg', image1)
The problem here is to find a near-circular object. This simple solution is based on computing the eccentricity of each and every contour; the objects detected this way are the drops of water.
I have a partial solution in place.
FIRST
I initially converted the image to the HSV color space and tinkered with the value channel. On doing so I came across something unique. In almost every image, the droplets have a tiny reflection of light. This was highlighted distinctly in the value channel.
Upon inverting this I was able to obtain the following:
Samples 1-3: inverted value channel (images omitted).
SECOND
Now we have to extract the location of those points. To do so I performed anomaly detection on the inverted value channel obtained. By anomaly I mean the black dot present in them.
In order to do this I calculated the median of the inverted value channel. I treated pixel values within 70% above and below the median as normal pixels, and every pixel value lying beyond this range as an anomaly. The black dots fall squarely into that range (a code sketch follows below).
Samples 1-3: detected anomalies (images omitted).
It did not turn out well for a few images.
As you can see the black dot is due to the reflection of light which is unique to the droplets of water. Other circular edges might be present in the image but the reflection distinguishes the droplet from those edges.
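A minimal sketch of the value-channel anomaly step described above (my reconstruction, not the author's exact code; the file name and the exact 70% band are assumptions):
import cv2
import numpy as np

img = cv2.imread("droplets.jpg")                      # hypothetical input
value = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]
inverted = 255 - value                                # reflections become black dots
med = np.median(inverted)
normal = (inverted > 0.3 * med) & (inverted < 1.7 * med)  # within 70% of the median
anomalies = ~normal                                   # candidate droplet reflections
ys, xs = np.nonzero(anomalies)                        # coordinates of the black dots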
THIRD
Now that we have the locations of these black dots, we can perform a Difference of Gaussians (DoG) (also mentioned in the update of the question) and obtain relevant edge information. If an obtained black-dot location lies within the discovered edges, it is taken to be a water droplet.
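A minimal Difference-of-Gaussians sketch (the kernel sizes are assumptions, borrowed from the 9x9/21x21 pair used in the answer code above):
import cv2
import numpy as np

gray = cv2.imread("droplets.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
g1 = cv2.GaussianBlur(gray, (9, 9), 0).astype(np.int16)
g2 = cv2.GaussianBlur(gray, (21, 21), 0).astype(np.int16)
dog = cv2.convertScaleAbs(g1 - g2)   # band-pass edges where the droplets sit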
Disclaimer: This method does not work for all the images. You can add your suggestions to this.
Good day. I am working on this subject as well, and my advice is: first apply denoising filters, such as Gaussian filters, and process the image after that.
You can detect these circles with blob detection rather than with contours; a sketch follows.
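A hedged sketch of what that blob detection could look like with OpenCV's built-in SimpleBlobDetector (the parameter values and the file name are assumptions to be tuned):
import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByCircularity = True
params.minCircularity = 0.6                  # keep roughly circular blobs
params.filterByArea = True
params.minArea = 50                          # discard tiny noise blobs

detector = cv2.SimpleBlobDetector_create(params)
gray = cv2.imread("droplets.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
keypoints = detector.detect(gray)

# draw the detected blobs with circles matching their size
out = cv2.drawKeypoints(gray, keypoints, None, (0, 255, 0),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)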

Matplotlib difference between two images

I have images (4000x2000 pixels) that are derived from the same image, but with subtle differences in less than 1% of the pixels. I'd like to plot the two images side by side and highlight the regions of the arrays that are different (by highlight I mean I want the pixels that differ to jump out, but still display the color that matches their value). So far I've been outlining the edges of such pixels with unfilled rectangles. I can do this very nicely in small images (~50x50) with:
from pylab import *   # the code uses the pylab interface (figure, imshow, colorbar, nonzero)
from matplotlib.patches import Rectangle

fig = figure(figsize=(20, 15))
ax1 = fig.add_subplot(1, 2, 1)
imshow(image1, interpolation='nearest', origin='lower')
colorbar()
ax2 = fig.add_subplot(1, 2, 2, sharex=ax1, sharey=ax1)
imshow(image2, interpolation='nearest', origin='lower')
colorbar()
# now show the differences
Xspots = image1 != image2
Xx, Xy = nonzero(Xspots)
for x, y in zip(Xx, Xy):
    # each axes needs its own patch instance
    ax1.add_patch(Rectangle((y - .5, x - .5), 1, 1, fill=False, ec='w'))
    ax2.add_patch(Rectangle((y - .5, x - .5), 1, 1, fill=False, ec='w'))
However this doesn't work so well when the image is very large. Strange things happen, for example when I zoom in the patch disappears. Also, this way sucks because it takes forever to load things when I zoom in/out.
I feel like there must be a better way to do this, maybe one where there is only one patch that determines where all of the things are, rather than a whole bunch of patches. I could do a scatter plot on top of the imshow image, but I don't know how to fix it so that the points will stay exactly the size of the pixel when I zoom in/out.
Any ideas?
I would try something with the alpha channel:
import copy
import numpy as np
import matplotlib
import matplotlib.cm as cm
import matplotlib.pyplot as plt

N, M = 20, 40
test_data = np.random.rand(N, M)
mark_mask = np.random.rand(N, M) < .01  # mark 1% of the pixels
# this is redundant in this case, but in general you need it
my_norm = matplotlib.colors.Normalize(vmin=0, vmax=1)
# grab a copy of the color map
my_cmap = copy.copy(cm.get_cmap('cubehelix'))
c_data = my_cmap(my_norm(test_data))
c_data[:, :, 3] = .5        # make everything half alpha
c_data[mark_mask, 3] = 1    # reset the marked pixels to full opacity
# plot it
plt.figure()
plt.imshow(c_data, interpolation='none')
No idea if this will work with your data or not.

pyplot scatter plot marker size

In the pyplot documentation for the scatter plot:
matplotlib.pyplot.scatter(x, y, s=20, c='b', marker='o', cmap=None, norm=None,
vmin=None, vmax=None, alpha=None, linewidths=None,
faceted=True, verts=None, hold=None, **kwargs)
The marker size
s:
size in points^2. It is a scalar or an array of the same length as x and y.
What kind of unit is points^2? What does it mean? Does s=100 mean 10 pixel x 10 pixel?
Basically I'm trying to make scatter plots with different marker sizes, and I want to figure out what the s number means.
This can be a somewhat confusing way of defining the size but you are basically specifying the area of the marker. This means, to double the width (or height) of the marker you need to increase s by a factor of 4. [because A = WH => (2W)(2H)=4A]
There is a reason, however, that the size of markers is defined in this way. Because of the scaling of area as the square of width, doubling the width actually appears to increase the size by more than a factor 2 (in fact it increases it by a factor of 4). To see this consider the following two examples and the output they produce.
import matplotlib.pyplot as plt

# doubling the width of markers
x = [0,2,4,6,8,10]
y = [0]*len(x)
s = [20*4**n for n in range(len(x))]
plt.scatter(x,y,s=s)
plt.show()
gives
Notice how the size increases very quickly. If instead we have
# doubling the area of markers
x = [0,2,4,6,8,10]
y = [0]*len(x)
s = [20*2**n for n in range(len(x))]
plt.scatter(x,y,s=s)
plt.show()
gives
Now the apparent size of the markers increases roughly linearly in an intuitive fashion.
As for the exact meaning of what a 'point' is, it is fairly arbitrary for plotting purposes; you can simply scale all of your sizes by a constant until they look reasonable.
Edit (in response to a comment from @Emma):
It's probably confusing wording on my part. The question asked about doubling the width of a circle, so in the first picture each circle's width (moving from left to right) is double the previous one, which for the area is an exponential with base 4. Similarly, in the second example each circle has double the area of the last one, which gives an exponential with base 2.
However, it is the second example (where we are scaling the area) in which doubling the area appears to make the circle twice as big to the eye. Thus, if we want a circle to appear a factor of n bigger, we would increase the area by a factor of n, not the radius, so the apparent size scales linearly with the area.
Edit, to visualize the comment by @TomaszGandor:
This is what it looks like for different functions of the marker size:
x = [0,2,4,6,8,10,12,14,16,18]
s_exp = [20*2**n for n in range(len(x))]
s_square = [20*n**2 for n in range(len(x))]
s_linear = [20*n for n in range(len(x))]
plt.scatter(x,[1]*len(x),s=s_exp, label='$s=2^n$', lw=1)
plt.scatter(x,[0]*len(x),s=s_square, label='$s=n^2$')
plt.scatter(x,[-1]*len(x),s=s_linear, label='$s=n$')
plt.ylim(-1.5,1.5)
plt.legend(loc='center left', bbox_to_anchor=(1.1, 0.5), labelspacing=3)
plt.show()
Because other answers here claim that s denotes the area of the marker, I'm adding this answer to clarify that this is not necessarily the case.
Size in points^2
The argument s in plt.scatter denotes the markersize**2. As the documentation says:
s : scalar or array_like, shape (n, ), optional
size in points^2. Default is rcParams['lines.markersize'] ** 2.
This can be taken literally. In order to obtain a marker which is x points large, you need to square that number and give it to the s argument.
So the relationship between the markersize of a line plot and the scatter size argument is the square. In order to produce a scatter marker of the same size as a plot marker of size 10 points, you would hence call scatter(..., s=100).
import matplotlib.pyplot as plt
fig,ax = plt.subplots()
ax.plot([0],[0], marker="o", markersize=10)
ax.plot([0.07,0.93],[0,0], linewidth=10)
ax.scatter([1],[0], s=100)
ax.plot([0],[1], marker="o", markersize=22)
ax.plot([0.14,0.86],[1,1], linewidth=22)
ax.scatter([1],[1], s=22**2)
plt.show()
Connection to "area"
So why do other answers and even the documentation speak about "area" when it comes to the s parameter?
Of course the units of points**2 are area units.
For the special case of a square marker, marker="s", the area of the marker is indeed directly the value of the s parameter.
For a circle, the area of the circle is area = pi/4*s.
For other markers there may not even be any obvious relation to the area of the marker.
In all cases however the area of the marker is proportional to the s parameter. This is the motivation to call it "area" even though in most cases it isn't really.
Specifying the size of the scatter markers in terms of some quantity that is proportional to the area of the marker makes sense insofar as it is the area of the marker that is perceived when comparing different patches, rather than its side length or diameter. That is, doubling the underlying quantity should double the area of the marker.
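A quick numeric check of the circle relation quoted above (markersize, i.e. sqrt(s), is the diameter in points):
import numpy as np

s = 100                        # scatter size argument
diameter = np.sqrt(s)          # 10 points, matching markersize=10
area = np.pi / 4 * s           # equals pi * (diameter / 2) ** 2
print(diameter, area, np.pi * (diameter / 2) ** 2)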
What are points?
So far the answer to what the size of a scatter marker means is given in units of points. Points are often used in typography, where fonts are specified in points. Also linewidths is often specified in points. The standard size of points in matplotlib is 72 points per inch (ppi) - 1 point is hence 1/72 inches.
It might be useful to be able to specify sizes in pixels instead of points. If the figure dpi is 72 as well, one point is one pixel. If the figure dpi is different (the matplotlib default is fig.dpi=100), then 1 point == fig.dpi/72 pixels.
While the scatter marker's size in points would hence look different for different figure dpi, one could produce a 10 by 10 pixel marker, which would always cover the same number of pixels:
import matplotlib.pyplot as plt

for dpi in [72, 100, 144]:
    fig, ax = plt.subplots(figsize=(1.5, 2), dpi=dpi)
    ax.set_title("fig.dpi={}".format(dpi))
    ax.set_ylim(-3, 3)
    ax.set_xlim(-2, 2)
    ax.scatter([0], [1], s=10**2,
               marker="s", linewidth=0, label="100 points^2")
    ax.scatter([1], [1], s=(10 * 72. / fig.dpi)**2,
               marker="s", linewidth=0, label="100 pixels^2")
    ax.legend(loc=8, framealpha=1, fontsize=8)
    fig.savefig("fig{}.png".format(dpi), bbox_inches="tight")
plt.show()
If you are interested in a scatter in data units, check this answer.
You can use markersize to specify the size of the circle in the plot method:
import numpy as np
import matplotlib.pyplot as plt
x1 = np.random.randn(20)
x2 = np.random.randn(20)
plt.figure(1)
# you can specify the marker size two ways directly:
plt.plot(x1, 'bo', markersize=20) # blue circles with size 20
plt.plot(x2, 'ro', ms=10,) # ms is just an alias for markersize
plt.show()
It is the area of the marker. I mean, if you have s1 = 1000 and then s2 = 4000, the relation between the radius of each circle is r_s2 = 2 * r_s1. See the following plot:
import matplotlib.pyplot as plt

plt.scatter(2, 1, s=4000, c='r')
plt.scatter(2, 1, s=1000, c='b')
plt.scatter(2, 1, s=10, c='g')
plt.show()
I had the same doubt when I saw the post, so I tried this example and then used a ruler on the screen to measure the radii.
I also attempted to use scatter for this purpose initially. After quite a bit of wasted time, I settled on the following solution.
import matplotlib.pyplot as plt

input_list = [{'x': 100, 'y': 200, 'radius': 50, 'color': (0.1, 0.2, 0.3)}]
output_list = []
for point in input_list:
    output_list.append(plt.Circle((point['x'], point['y']), point['radius'],
                                  color=point['color'], fill=False))
ax = plt.gca()
ax.cla()
ax.set_aspect('equal')
ax.set_xlim((0, 1000))
ax.set_ylim((0, 1000))
for circle in output_list:
    ax.add_artist(circle)
plt.show()
This is based on an answer to this question
If the displayed size of the circles corresponds to the square of the parameter s, then take the square root of each element you append to your size array, e.g. s = [1, 1.414, 1.73, 2.0, 2.24], so that when the plot squares these values the relative sizes follow a linear progression.
If I were to square each one as it gets output to the plot: output = [1, 2, 3, 4, 5]. Try a list comprehension: s = [numpy.sqrt(i) for i in s]