Convert Image into Flag - jython

I have to change any given picture into the Pride flag.
To do so, my current code is:
pixels = getPixels(picture)
width = getWidth(picture)
height = getHeight(picture)
for index in range(0, len(pixels)/7):
    pixel = pixels[index]
    setColor(pixel, red)
for index in range(len(pixels)/7, len(pixels)):
    pixel = pixels[index]
    setColor(pixel, orange)
for index in range(2*len(pixels)/7, len(pixels)):
    pixel = pixels[index]
    setColor(pixel, yellow)
Note: I have not included the entire code snippet, it continues on in that same manner.
The problem is that the stripe colours are bleeding into each other, and it shows up as this:
What could be causing this and how do I go about fixing it?

It would seem there is an uneven number of pixels for the line divisions you are doing. Find out how many pixels there are per line, then use that in your for loops. For example...
no_pixels_per_line = len(pixels) / height
for index in range(0, (height/7) * no_pixels_per_line):
    pixel = pixels[index]
    setColor(pixel, red)
etc...
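Extending that idea to all seven bands might look like the sketch below (untested; it assumes JES's built-in colour names, so treat the colour list as a placeholder you will want to replace with the actual flag colours):
colors = [red, orange, yellow, green, blue, magenta, gray]  # placeholder band colours
pixels = getPixels(picture)
height = getHeight(picture)
pixels_per_line = len(pixels) / height
for band in range(7):
    # compute whole rows per band, so every band starts on a row boundary and stays straight
    start_row = band * height / 7
    end_row = (band + 1) * height / 7
    for index in range(start_row * pixels_per_line, end_row * pixels_per_line):
        setColor(pixels[index], colors[band])
Because each band's start and end are rounded to whole rows, the colours no longer bleed into each other when len(pixels) is not evenly divisible by 7.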

Related

Changing the size of a QPixmap

How can I change the size/shape of a QPixmap to get a result like this?
original
Result
QTransform can create basic shape transformations, which can then be applied with QPixmap.transformed().
This specific case uses a "perspective" transformation, which uses projection and scaling information, and it's achieved using QTransform.quadToQuad(). Note that QTransform also provides squareToQuad(), but it's sometimes unreliable.
The important thing is to create two QPolygonF instances, with the first based on the rectangle of the image, and the second with those corners at their "projected" points.
Note that creating a QPolygonF from a rectangle results in a polygon with 5 points, the last one being the same as the first in order to make it "closed". QTransform's quadToQuad(), instead, requires 4 points, so you have to remove the last one. Also note that the corners must be in the same order: top left, top right, bottom right, bottom left.
In the following example, the perspective moves the top right corner down to 20% of the height and the bottom right corner up to the mirrored point (80% of the height).
# assuming PyQt5; with PySide2/6 only the import line changes
from PyQt5 import QtCore, QtGui, QtWidgets

class ProjectionTest(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        layout = QtWidgets.QVBoxLayout(self)
        source = QtGui.QPixmap('square.jpg')
        layout.addWidget(QtWidgets.QLabel(pixmap=source))
        # the original rectangle of the image, as a QRectF (floating point)
        rect = QtCore.QRectF(source.rect())
        # the source polygon, ignoring the last (closing) point
        square = QtGui.QPolygonF(rect)[:4]
        # the "projected" polygon
        cone = QtGui.QPolygonF([
            rect.topLeft(),
            QtCore.QPointF(rect.right(), rect.height() * .2),
            QtCore.QPointF(rect.right(), rect.height() * .8),
            rect.bottomLeft(),
        ])
        transform = QtGui.QTransform()
        if QtGui.QTransform.quadToQuad(square, cone, transform):
            new = source.transformed(transform, QtCore.Qt.SmoothTransformation)
            layout.addWidget(QtWidgets.QLabel(pixmap=new))
And this is the final result:

The code to set placement and font size of caption/legend of line plot image in DM

I am trying to find the commands to set the placement and font size of the caption/legend of a line plot image via DM scripting, as shown in the attached pics. I can't find them in the help files; I only found LinePlotImageDisplayIsLegendShown. Any suggestions? Thanks.
These are multiple questions, so one at a time.
The legend section of a lineplot cannot be altered. It can only be shown or hidden. Its size and position are always determined automatically by the application.
The caption font and text size cannot be altered by script with the current version of GMS. You may want to file a feature request for such a script command here: Gatan issue/bug submission form.
The placement coordinates of the window in your screenshot represent just the coordinates and size of the image window on the workspace, not any of the lineplot content. So you would set it like:
image plot := RealImage("Plot",4,500)
plot = sin( icol/iwidth*5*pi() )
plot.ShowImage()
ImageDocument doc = plot.ImageGetOrCreateImageDocument()
DocumentWindow win = doc.ImageDocumentGetWindow()
number posX = 100
number posY = 50
win.WindowSetFramePosition( posX, posY )
number w = 1200
number h = 200
win.WindowSetFrameSize( w, h )
More commands on window placement are found here:

Combine two pictures (different sizes) horizontally?

I have 2 pictures and need to combine them horizontally. I know numpy and cv2 (OpenCV) should help me do this, but I don't know how.
I used img1 = cv2.imread(file1), img2 = cv2.imread(file2).
The 2 images' shapes are (2048, 1334, 3) and (720, 1200, 3).
How could I do this? When I open these 2 images, they have similar heights but different widths.
I only know how to do it if the 2 pics have the same size: then I can just use concatenate. But my 2 pics have different sizes.
For the final output, I want them to keep their own widths; for the height, choose the biggest or smallest.
So I imagine the final output could be maybe 2/3 of the width for one picture and 1/3 for the other, which is totally fine. I don't need the 2 to be evenly distributed, just to keep their own widths. Thanks!
You need to either trim a bit off the bottom of the taller image or pad the shorter one with black pixels.
In order to trim the taller image (here assumed to be image1), you can do:
trimmed = image1[:image2.shape[0], :, :]
This keeps only the rows from 0 up to the height of the shorter image.
Or, you can pad the shorter image with black rows at the bottom:
black = np.zeros((image1.shape[0] - image2.shape[0], image2.shape[1], 3), dtype=image2.dtype)
image2 = np.vstack((image2, black))
And then you concatenate the two images horizontally.
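Putting the padding variant together, a minimal sketch (untested; the file names are hypothetical and img1 is assumed to be the taller image):
import cv2
import numpy as np

img1 = cv2.imread('tall.png')    # e.g. shape (2048, 1334, 3); hypothetical file names
img2 = cv2.imread('short.jpg')   # e.g. shape (720, 1200, 3)

# pad the shorter image with black rows so both images have the same height
pad = np.zeros((img1.shape[0] - img2.shape[0], img2.shape[1], 3), dtype=img2.dtype)
img2_padded = np.vstack((img2, pad))

# place the two images side by side
combined = np.hstack((img1, img2_padded))
cv2.imwrite('combined.png', combined)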
I just solved my question.
Basically, use the cv2.resize() function to resize the images,
then simply concatenate them horizontally or vertically.
Just change the axis.
img1 = cv2.imread('xxx.png')
img2 = cv2.imread('yyy.jpg')
then compare img1.shape and img2.shape
Use cv2.resize() to make their widths (or heights) the same.
vis = np.concatenate((img1, img2), axis=1)
cv2.imwrite('out.png', vis)
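A slightly fuller sketch of that approach, scaling img2 to match img1's height while keeping its aspect ratio (file names as in the snippet above):
import cv2
import numpy as np

img1 = cv2.imread('xxx.png')
img2 = cv2.imread('yyy.jpg')

# scale img2 so its height matches img1's, preserving its aspect ratio
scale = img1.shape[0] / float(img2.shape[0])
new_width = int(round(img2.shape[1] * scale))
img2_resized = cv2.resize(img2, (new_width, img1.shape[0]))

# axis=1 stacks the images horizontally; use axis=0 for vertical stacking
vis = np.concatenate((img1, img2_resized), axis=1)
cv2.imwrite('out.png', vis)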

How would one draw an arbitrary curve in createJS

I am attempting to write a function using createJS to draw an arbitrary function and I'm having some trouble. I come from a d3 background so I'm having trouble breaking out of the data-binding mentality.
Suppose I have 2 arrays xData = [-10, -9, ... 10] and yData = Gaussian(xData), which is pseudocode for mapping each element of xData to its value on the bell curve. How can I now draw yData as a function of xData?
Thanks
To graph an arbitrary function in CreateJS, you draw lines connecting all the data points you have. Because, well, that's what graphing is!
The easiest way to do this is a for loop going through each of your data points, and calling a lineTo() for each. Because the canvas drawing API starts a line where you last 'left off', you actually don't even need to specify the line start for each line, but you DO have to move the canvas 'pen' to the first point before you start drawing. Something like:
// first make our shape to draw into.
let graph = new createjs.Shape();
let g = graph.graphics;
g.beginStroke("#000");
let xStart = xData[0];
let yStart = yourFunction(xData[0]);
g.moveTo(xStart, yStart);
for (let i = 1; i < xData.length; i++) {
    let nextX = xData[i];               // but normalized to fit on your graph area
    let nextY = yourFunction(xData[i]); // but similarly normalized
    g.lineTo(nextX, nextY);
}
This should get a basic version of the function drawing! Note that the line will be pretty jagged if you don't have a lot of data points, and you'll have to treat (normalize) your data to make it fit onto your screen. For instance, if you start at -10 for X, that's off the screen to the left by 10 pixels - and if it only runs from -10 to +10, your entire graph will be squashed into only 20 pixels of width.
I have a codepen showing this approach to graphing here. It's mapped to hit every pixel on the viewport and calculate a Y value for it, though, rather than your case where you have input X values. And FYI, the code for graphing is all inside the 'run' function at the top - everything in the PerlinNoiseMachine class is all about data generation, so you can ignore it for the purposes of this question.
Hope that helps! If you have any specific follow-up questions or code samples, please amend your question.

OpenCV detect blobs on the image

I need to find blobs on the image (draw a rect around them / get the max and min radius). (samples below)
The problem is to find the correct filters for the image that will allow a Canny or threshold transformation to highlight the blobs. Then I'm going to use findContours to find the rectangles.
I've tried:
Threshold - with different levels
blur->erode->erode->grayscale->canny
changing the image tone with a variety of "lines"
etc. The best result was detecting a piece (20-30%) of a blob, which is not enough to draw a rect around it. Also, thanks to shadows, dots not related to the blobs were detected, which also prevents detecting the area.
As I understand it, I need to find contours that have hard contrast (not smooth like in a shadow). Is there any way to do that with OpenCV?
Update
cases separately: image 1, image 2, image 3, image 4, image 5, image 6, image 7, image 8, image 9, image 10, image 11, image 12
One more Update
I believe that the blobs have a contrast area at the edge. So I've tried to make the edges stronger: I created 2 grayscale Mats, A and B, applied Gaussian blur to the second one, B (to reduce noise a bit), then made some calculations: go around every pixel and find the max difference between pixel (Xi, Yi) of A and the nearby pixels of B:
and apply that max difference to (Xi, Yi). So I get something like this:
Am I on the right track? By the way, can I achieve something like this via OpenCV methods?
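For reference, a sketch of that per-pixel max-difference step (the blur kernel and the 3x3 neighbourhood size are assumptions, not the exact values used above):
import cv2
import numpy as np

A = cv2.imread('blob.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical file name
B = cv2.GaussianBlur(A, (5, 5), 0)

# For a fixed A(x, y), the largest |A(x, y) - B(x', y')| over a 3x3 neighbourhood of B
# is reached at either the local max or the local min of B.
kernel = np.ones((3, 3), np.uint8)
b_max = cv2.dilate(B, kernel)   # local max of B (grayscale dilation)
b_min = cv2.erode(B, kernel)    # local min of B (grayscale erosion)
diff = np.maximum(np.abs(A - b_max), np.abs(A - b_min))

cv2.imwrite('edges.png', np.uint8(np.clip(diff, 0, 255)))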
Update: Image denoising helps to reduce noise, Sobel highlights the contours, then threshold + findContours and a custom convexHull give something similar to what I'm looking for, but it is not good for some blobs.
Since there are big differences between the input images, the algorithm should be able to adapt to the situation. Since Canny is based on detecting high frequencies, my algorithm treats the sharpness of the image as the parameter used for preprocessing adaptation. I didn't want to spend a week figuring out the functions for all the data, so I applied a simple, linear function based on 2 images and then tested with a third one. Here are my results:
Keep in mind that this is a very basic approach and is only proving a point. It will need experiments, tests, and refining. The idea is to use Sobel and sum over all the pixels acquired. That, divided by the size of the image, should give you a basic estimation of the high-frequency response of the image. Now, experimentally, I found values of clipLimit for the CLAHE filter that work in 2 test cases and found a linear function connecting the high-frequency response of the input with a CLAHE filter, yielding good results.
sobel = get_sobel(img)
clip_limit = (-2.556) * np.sum(sobel)/(img.shape[0] * img.shape[1]) + 26.557
That's the adaptive part. Now for the contours. It took me a while to figure out a correct way of filtering out the noise. I settled on a simple trick: using contour finding twice. First I use it to filter out the unnecessary, noisy contours. Then I continue with some morphological magic to end up with correct blobs for the objects being detected (more details in the code). The final step is to filter the bounding rectangles based on the calculated mean, since, on all of the samples, the blobs are of relatively similar size.
import cv2
import numpy as np

def unsharp_mask(img, blur_size = (5,5), imgWeight = 1.5, gaussianWeight = -0.5):
    gaussian = cv2.GaussianBlur(img, (5,5), 0)
    return cv2.addWeighted(img, imgWeight, gaussian, gaussianWeight, 0)

def smoother_edges(img, first_blur_size, second_blur_size = (5,5), imgWeight = 1.5, gaussianWeight = -0.5):
    img = cv2.GaussianBlur(img, first_blur_size, 0)
    return unsharp_mask(img, second_blur_size, imgWeight, gaussianWeight)

def close_image(img, size = (5,5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

def open_image(img, size = (5,5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)

def shrink_rect(rect, scale = 0.8):
    center, (width, height), angle = rect
    width = width * scale
    height = height * scale
    rect = center, (width, height), angle
    return rect

def clahe(img, clip_limit = 2.0):
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(5,5))
    return clahe.apply(img)

def get_sobel(img, size = -1):
    sobelx64f = cv2.Sobel(img, cv2.CV_64F, 2, 0, size)
    abs_sobel64f = np.absolute(sobelx64f)
    return np.uint8(abs_sobel64f)

img = cv2.imread("blobs4.jpg")
# save color copy for visualizing
imgc = img.copy()
# resize image to make the analytics easier (a form of filtering)
resize_times = 5
img = cv2.resize(img, None, img, fx = 1 / resize_times, fy = 1 / resize_times)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# use sobel operator to evaluate high frequencies
sobel = get_sobel(img)
# experimentally calculated function - needs refining
clip_limit = (-2.556) * np.sum(sobel) / (img.shape[0] * img.shape[1]) + 26.557
# don't apply clahe if there is enough high freq to find blobs
if clip_limit < 1.0:
    clip_limit = 0.1
# limit clahe if there's not enough detail - needs more tests
if clip_limit > 8.0:
    clip_limit = 8
# apply clahe and unsharp mask to improve high frequencies as much as possible
img = clahe(img, clip_limit)
img = unsharp_mask(img)
# filter the image to ensure edge continuity and perform Canny
# (values selected experimentally, using trackbars)
img_blurred = cv2.GaussianBlur(img.copy(), (2*2+1, 2*2+1), 0)
canny = cv2.Canny(img_blurred, 35, 95)
# find first contours
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# prepare black image to draw contours
canvas = np.ones(img.shape, np.uint8)
for c in cnts:
    l = cv2.arcLength(c, False)
    x, y, w, h = cv2.boundingRect(c)
    aspect_ratio = float(w) / h
    # filter "bad" contours (values selected experimentally)
    if l > 500:
        continue
    if l < 20:
        continue
    if aspect_ratio < 0.2:
        continue
    if aspect_ratio > 5:
        continue
    if l > 150 and (aspect_ratio > 10 or aspect_ratio < 0.1):
        continue
    # draw all the other contours
    cv2.drawContours(canvas, [c], -1, (255, 255, 255), 2)
# perform closing and blurring, to close the gaps
canvas = close_image(canvas, (7,7))
img_blurred = cv2.GaussianBlur(canvas, (8*2+1, 8*2+1), 0)
# smooth the edges a bit to make sure canny will find continuous edges
img_blurred = smoother_edges(img_blurred, (9,9))
kernel = np.ones((3,3), np.uint8)
# erode to make sure separate blobs are not touching each other
eroded = cv2.erode(img_blurred, kernel)
# perform necessary thresholding before Canny
_, im_th = cv2.threshold(eroded, 50, 255, cv2.THRESH_BINARY)
canny = cv2.Canny(im_th, 11, 33)
# find contours again. this time mostly the right ones
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# calculate the mean area of the contours' bounding rectangles
sum_area = 0
rect_list = []
for i, c in enumerate(cnts):
    rect = cv2.minAreaRect(c)
    _, (width, height), _ = rect
    area = width * height
    sum_area += area
    rect_list.append(rect)
mean_area = sum_area / len(cnts)
# choose only rectangles that fulfill requirement:
# area > mean_area*0.6
for rect in rect_list:
    _, (width, height), _ = rect
    box = cv2.boxPoints(rect)
    box = np.int0(box * 5)
    area = width * height
    if area > mean_area * 0.6:
        # shrink the rectangles, since the shadows and reflections
        # make the resulting rectangle a bit bigger
        # the value was guessed - might need refining
        rect = shrink_rect(rect, 0.8)
        box = cv2.boxPoints(rect)
        box = np.int0(box * resize_times)
        cv2.drawContours(imgc, [box], 0, (0, 255, 0), 1)
# resize for visualizing purposes
imgc = cv2.resize(imgc, None, imgc, fx = 0.5, fy = 0.5)
cv2.imshow("imgc", imgc)
cv2.imwrite("result3.png", imgc)
cv2.waitKey(0)
Overall I think that's a very interesting problem, a little bit too big to be answered here. The approach I presented is to be treated as a signpost, not a complete solution. The basic idea being:
Adaptive preprocessing.
Finding contours twice: for filtering and then for the actual classification.
Filtering the blobs based on their mean size.
Thanks for the fun and good luck!
Here is the code I used:
import cv2
from sympy import Point, Ellipse
import numpy as np

x1 = 'C:\\Users\\Desktop\\python\\stack_over_flow\\XsXs9.png'
image = cv2.imread(x1, 0)
image1 = cv2.imread(x1, 1)
x, y = image.shape
median = cv2.GaussianBlur(image, (9,9), 0)
median1 = cv2.GaussianBlur(image, (21,21), 0)
a = median1 - median
c = 255 - a
ret, thresh1 = cv2.threshold(c, 12, 255, cv2.THRESH_BINARY)
kernel = np.ones((5,5), np.uint8)
dilation = cv2.dilate(thresh1, kernel, iterations = 1)
kernel = np.ones((5,5), np.uint8)
opening = cv2.morphologyEx(dilation, cv2.MORPH_OPEN, kernel)
cv2.imwrite('D:\\test12345.jpg', opening)
ret, contours, hierarchy = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = np.size(contours[:])
Blank_window = np.zeros([x, y, 3])
Blank_window = np.uint8(Blank_window)
for u in range(0, c-1):
    if np.size(contours[u]) > 200:
        ellipse = cv2.fitEllipse(contours[u])
        (center, axes, orientation) = ellipse
        majoraxis_length = max(axes)
        minoraxis_length = min(axes)
        eccentricity = np.sqrt(1 - (minoraxis_length/majoraxis_length)**2)
        if eccentricity < 0.8:
            cv2.drawContours(image1, contours, u, (255, 1, 255), 3)
cv2.imwrite('D:\\marked.jpg', image1)
The problem here is to find near-circular objects. This simple solution is based on finding the eccentricity of each and every contour. The objects being detected are drops of water.
I have a partial solution in place.
FIRST
I initially converted the image to the HSV color space and tinkered with the value channel. On doing so I came across something unique. In almost every image, the droplets have a tiny reflection of light. This was highlighted distinctly in the value channel.
Upon inverting this I was able to obtain the following:
Sample 1:
Sample 2:
Sample 3:
SECOND
Now we have to extract the locations of those points. To do so I performed anomaly detection on the inverted value channel. By anomaly I mean the black dots present in them.
In order to do this I calculated the median of the inverted value channel. I treated pixel values within 70% above and below the median as normal pixels, and every pixel value lying beyond this range as an anomaly. The black dots fit perfectly there.
Sample 1:
Sample 2:
Sample 3:
It did not turn out well for a few images.
As you can see the black dot is due to the reflection of light which is unique to the droplets of water. Other circular edges might be present in the image but the reflection distinguishes the droplet from those edges.
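A minimal sketch of the value-channel inversion and median-based anomaly threshold described above (the 70% band comes from the text; the file name and everything else here is an assumption):
import cv2
import numpy as np

img = cv2.imread('droplets.jpg')             # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
value = hsv[:, :, 2]
inverted = 255 - value                       # the light reflections become dark dots

med = np.median(inverted)
lower, upper = 0.3 * med, 1.7 * med          # "normal" pixels: within 70% of the median
anomalies = np.uint8(((inverted < lower) | (inverted > upper)) * 255)

cv2.imwrite('anomalies.png', anomalies)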
THIRD
Now, since we have the locations of these black dots, we can perform Difference of Gaussians (DoG) (also mentioned in the update of the question) and obtain relevant edge information. If the location of a black dot lies within the edges discovered, it is said to be a water droplet.
Disclaimer: This method does not work for all the images. You can add your suggestions to this.
Good day. I am working on this subject, and my advice to you is: first apply a few denoising filters, such as Gaussian filters, then process the image after that.
You can detect these circles with blob detection rather than with contours.
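For example, a minimal blob-detection sketch with OpenCV's SimpleBlobDetector (all parameter values and the file name here are assumptions and will need tuning):
import cv2

img = cv2.imread('droplets.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
img = cv2.GaussianBlur(img, (9, 9), 0)                    # denoise first, as suggested above

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 100
params.filterByCircularity = True
params.minCircularity = 0.5

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)

# draw circles whose size reflects the detected blob size
out = cv2.drawKeypoints(img, keypoints, None, (0, 255, 0),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('blobs_detected.png', out)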