Centering text in an image - PIL

The code below runs without any error, but it does not center the text. How can I center it?
import arabic_reshaper
import PIL.Image
import PIL.ImageDraw
import PIL.ImageFont
from bidi.algorithm import get_display

unicode_text = u"\u0627\u0628\u067E"
list_of_letters = list(unicode_text)
char = u''.join(list_of_letters)
t1 = arabic_reshaper.reshape(char)
W, H = (32, 32)
img = PIL.Image.new('RGBA', (W, H), (255, 255, 255))
draw = PIL.ImageDraw.Draw(img)
font = PIL.ImageFont.truetype(r"C:\Downloads\arabic.ttf", 15)
t2 = get_display(t1)
w, h = draw.textsize(t2.encode('utf-8'))
draw.text(((W - w) / 2, (H - h) / 2), t2, fill="#000000", font=font)

Your code does not center correctly because it does not retrieve the actual character width and height. You can see this if you print out the sizes that textsize returns and then change the font size: you still get the same sizes!
Why does it not change? Because you load a font but then don't use it for measuring. If you set it inside the draw object, or add font=font to both draw.textsize and draw.text, it works as expected.
(Doing just that raises an error on the original textsize line; you may have tried to work around an unrelated issue by adding .encode('utf-8'), but that is not necessary.)
draw = PIL.ImageDraw.Draw(img)
draw.font = PIL.ImageFont.truetype("times.ttf", 48)
t2 = get_display(t1)
w, h = draw.textsize(t2)
draw.text(((W - w) / 2, (H - h) / 2), t2, fill="#000000")
print("text %s: w %d h %d" % (t2, w, h))  # ord() only accepts a single character, so print the string itself
This results in correctly centered characters throughout, the same for both Latin and Arabic letters.
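Alternatively, as mentioned above, the same fix works by passing the loaded font explicitly to both calls instead of setting it on the draw object (a minimal sketch of the same idea):
font = PIL.ImageFont.truetype("times.ttf", 48)
w, h = draw.textsize(t2, font=font)  # measure with the same font that will draw the text
draw.text(((W - w) / 2, (H - h) / 2), t2, fill="#000000", font=font)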

Related

Calculating the size of an object using opencv and numpy poly1d

I'm looking to use a small numpy array to generate a curve that I can use to predict the height measurement at unknown points. I have several known points that I am using to create a poly1d. I know it's possible: we use software at work that does this just fine, and when I tested a different image by plugging the values into Excel and fitting the polynomial there, it worked fine. But on another calibrated image I get drastically different results.
Here is the image that I'm trying to measure.
The stick on the front of the pole contains known measurements. From bottom to top, they are 3'6" (42"), 6'6" (78"), 9'8" (116"), and 13' (156").
The picture has been through opencv undistort with a calibrated camera.
This is the function that actually performs the logic. x and y are gathered by cv2 EVENT_LBUTTONUP, and sent to this function.
Checking the lengths of the array is just to help me figure out why this isn't working; I'm trying to generate a line to show the curve fit.
dist = self.firstClick - y
self.yData.append(dist)
if len(self.yData) > 4:
    print(self.poly(dist))
if len(self.yData) == 4:
    array = np.array(self.xData)
    array = np.expand_dims(array, axis=0)
    print(self.xData)
    print(self.yData)
    array = np.append(array, [self.yData], axis=0)
    print(array)
    x = array[:, 0]
    y = array[:, 1]
    self.poly = np.poly1d(np.polyfit(x, y, 2))
    poly1d = np.poly1d(self.poly)
    xp = np.linspace(-2, 20, 1)
    _ = plt.plot(x, y, '.', xp, self.poly(xp), '-', xp, self.poly(xp), '--')
    plt.ylim(0, 200)
    plt.show()
When I run this code, the values quickly climb into the tens of thousands when I try to take the measurement at 18'11" (the lowest wire). Any help would be appreciated; I've been up all night trying to fit this curve.
Edit:
Sorry, I should have included the code used to display and scale the image.
self.img = cv2.imread(imagePath, cv2.IMREAD_ANYCOLOR)
self.scale_percent = 30
self.width = int(self.img.shape[1] * self.scale_percent/100)
self.height = int(self.img.shape[0] * self.scale_percent/100)
dsize = (self.width, self.height)
self.output = cv2.resize(self.img, dsize)
img = self.output
cv2.imshow('image', img)
cv2.setMouseCallback('image', self.click_event)
cv2.waitKey()
I just called this function to display the image and the below code to calibrate the values.
if self.firstClick == 0:
    self.firstClick = y
    cv2.putText(self.output, "Pole Base", (x, y), font, 1, (255, 255, 0), 2)
    cv2.imshow('image', self.output)
elif self.firstClick != 0 and self.secondClick == 0:
    self.secondClick = y
    print("The difference in first and second clicks is", self.firstClick - self.secondClick)
    first = self.firstClick - self.secondClick
    inch = first / 42
    foot = inch * 12
    self.foot = foot
    print("One foot is currently: ", foot)
    self.firstLine = 3.5 * 12
    self.secondLine = 6.5 * 12
    self.thirdLine = 9.67 * 12
    self.fourthLine = 13 * 12
    self.xData = np.array([self.firstLine, self.secondLine, self.thirdLine, self.fourthLine])
    self.yData.append(self.firstLine)
    print(self.firstLine)
    print(self.secondLine)
    print(self.thirdLine)
    print(self.fourthLine)
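For reference, here is a minimal, self-contained sketch of the polyfit/poly1d workflow this code is aiming for; the pixel distances are made-up values, paired with the known heights in inches:
import numpy as np
import matplotlib.pyplot as plt

# hypothetical calibration points: pixel distance from pole base -> known height in inches
pixels = np.array([120.0, 210.0, 290.0, 360.0])
inches = np.array([42.0, 78.0, 116.0, 156.0])

# fit a quadratic and wrap the coefficients in a callable poly1d
poly = np.poly1d(np.polyfit(pixels, inches, 2))

# predict the height at a new pixel distance
print(poly(400.0))

# plot the fit over a dense range to sanity-check the curve
xp = np.linspace(pixels.min(), pixels.max() + 100, 100)
plt.plot(pixels, inches, '.', xp, poly(xp), '-')
plt.ylim(0, 250)
plt.show()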

Tesseract and multiple line license plates: How can I get characters from a two line license plate?

I tried getting individual characters from the image and passing them through the OCR, but the result is jumbled characters. Passing the whole image at least returns the characters in order, but it seems like the OCR is trying to read all the other contours as well.
Example image: (the two-line license plate photo being used)
The result : 6A7J7B0
Desired result : AJB6779
The code
import re
import cv2
import pytesseract

img = cv2.imread("data/images/car6.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# resize image to three times as large as original for better readability
gray = cv2.resize(gray, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)
# perform gaussian blur to smoothen image
blur = cv2.GaussianBlur(gray, (5, 5), 0)
# threshold the image using Otsu's method to preprocess for tesseract
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
# create rectangular kernel for dilation
rect_kern = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
# apply dilation to make regions more clear
dilation = cv2.dilate(thresh, rect_kern, iterations=1)
# find contours of regions of interest within license plate
try:
    contours, hierarchy = cv2.findContours(dilation, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
except:
    ret_img, contours, hierarchy = cv2.findContours(dilation, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# sort contours left-to-right
sorted_contours = sorted(contours, key=lambda ctr: cv2.boundingRect(ctr)[0])
# create copy of gray image
im2 = gray.copy()
# create blank string to hold license plate number
plate_num = ""
# loop through contours and find individual letters and numbers in license plate
for cnt in sorted_contours:
    x, y, w, h = cv2.boundingRect(cnt)
    height, width = im2.shape
    # if height of box is not tall enough relative to total height then skip
    if height / float(h) > 6: continue
    ratio = h / float(w)
    # if height to width ratio is less than 1.5 skip
    if ratio < 1.5: continue
    # if width is not wide enough relative to total width then skip
    if width / float(w) > 15: continue
    area = h * w
    # if area is less than 100 pixels skip
    if area < 100: continue
    # draw the rectangle
    rect = cv2.rectangle(im2, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # grab character region of image
    roi = thresh[y - 5:y + h + 5, x - 5:x + w + 5]
    # perform bitwise not to flip image to black text on white background
    roi = cv2.bitwise_not(roi)
    # perform another blur on character region
    roi = cv2.medianBlur(roi, 5)
    try:
        text = pytesseract.image_to_string(roi, config='-c tessedit_char_whitelist=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ --psm 8 --oem 3')
        # clean tesseract text by removing any unwanted blank spaces
        clean_text = re.sub(r'[\W_]+', '', text)
        plate_num += clean_text
    except:
        text = None
if plate_num:
    print("License Plate #: ", plate_num)
For me, PSM mode 11 worked; it was able to detect both single-line and multi-line plates:
pytesseract.image_to_string(img, lang='eng', config='--oem 3 --psm 11').replace("\n", "")
PSM 11 means: Sparse text. Find as much text as possible in no particular order.
If you want to extract the license plate number across two rows, you can replace the following line:
sorted_contours = sorted(contours, key=lambda ctr: cv2.boundingRect(ctr)[0])
with
sorted_contours = sorted(contours, key=lambda ctr: cv2.boundingRect(ctr)[0] + cv2.boundingRect(ctr)[1] * img.shape[1])
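The weighted key gives row-major order: a box on a lower row ranks after every box on the row above, while boxes within a row still sort left-to-right. A small illustration with made-up (x, y) boxes and a hypothetical image width:
image_width = 1000  # hypothetical plate-image width in pixels
boxes = [(400, 10), (50, 10), (200, 120), (30, 120)]  # (x, y) of bounding boxes
ordered = sorted(boxes, key=lambda b: b[0] + b[1] * image_width)
print(ordered)  # [(50, 10), (400, 10), (30, 120), (200, 120)]
Note this assumes characters on the same row have roughly the same y coordinate.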

how to fix this issue ? cv2.error: OpenCV(4.1.2) ... error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'

I am trying to rotate multiple images in a folder, but I get this error when I set fx and fy greater than 0.2 in the resize function:
(cv2.error: OpenCV(4.1.2) ... error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize')
However, when I rotate a single image with fx and fy set to 0.5, it works perfectly fine.
Is there a way to fix this? Augmenting the images one by one is very tedious. Also, the images rotated by the code attached here with fx and fy values of 0.2 come out with undesirable dimensions, i.e. the photos are very small and their quality is reduced.
The part of the code that rotates the multiple images is given below:
a = 0  # counter used to name the output files
for imag in os.listdir(source_folder):
    img = cv2.imread(os.path.join(source_folder, imag))
    img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)
    width = img.shape[1]
    height = img.shape[0]
    M = cv2.getRotationMatrix2D((width / 2, height / 2), 5, 1.0)
    rotated_img = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    cv2.imwrite(os.path.join(destination_right_folder, "v4rl" + str(a) + '.jpg'), rotated_img)
    #cv2.imshow("rotated_right", rotated_img)
    #cv2.waitKey(0)
    a += 1
Add a check after you read the image to see if it is None:
img = cv2.imread(os.path.join(source_folder,imag))
if img is None: continue
The error is happening when you call the cv2.resize() function. Maybe files are being read that are not images.
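Put together, the start of the loop would look something like this (a sketch; the rotation and write steps stay as in the question):
for imag in os.listdir(source_folder):
    img = cv2.imread(os.path.join(source_folder, imag))
    if img is None:
        # not a readable image (e.g. a stray non-image or hidden file), skip it
        continue
    img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)
    # ... rotation and imwrite as before ...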

OpenCV Error using function matchTemplate

While using the matchTemplate function in OpenCV, I get the error that the template image is larger than the original image. How to overcome that?
The code is as follows:
def imagecheck(name1):
    os.chdir('/content/drive/My Drive/Mad Street Den/Images')
    main_image = cv2.imread('image_name_100.jpg')
    gray_image = cv2.cvtColor(main_image, cv2.COLOR_BGR2GRAY)
    # open the template as gray scale image
    os.chdir('/content/drive/My Drive/Mad Street Den/Crops')
    template = cv2.imread(name1, 0)
    width, height = template.shape[::-1]  # get the width and height
    # match the template using cv2.matchTemplate
    match = cv2.matchTemplate(gray_image, template, cv2.TM_CCOEFF_NORMED)
    threshold = 0.9
    position = np.where(match >= threshold)  # get the location of template in the image
    for point in zip(*position[::-1]):  # draw the rectangle around the matched template
        cv2.rectangle(main_image, point, (point[0] + width, point[1] + height), (0, 204, 153), 2)
    #result=[position[1][0],position[0][0],position[0][1],position[0][2]]
    result = []
    if all(position):
        result.append(int(position[1]))
        result.append(int(position[0]))
        result.append(int(position[1] + width))
        result.append(int(position[0] + height))
    return result
    #cv2_imshow(main_image)

for i in range(0, 273):
    name1 = 'image_name_' + str(i) + '.jpg'
    result = imagecheck(name1)
    print(name1, ' : ', result)
The Error is
error: OpenCV(3.4.3) /io/opencv/modules/imgproc/src/templmatch.cpp:1107: error: (-215:Assertion failed) _img.size().height <= _templ.size().height && _img.size().width <= _templ.size().width in function 'matchTemplate'
You can avoid the issue by not attempting to match a template against an image when the template is larger. Compare the template dimensions to the image dimensions and, in this case, return [] if the template is larger in any dimension.
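For example, a guard near the top of imagecheck could look like this (a sketch using the question's variable names):
ih, iw = gray_image.shape[:2]  # image height and width
th, tw = template.shape[:2]    # template height and width
if th > ih or tw > iw:
    # the template can never fit inside the image, so skip matching
    return []
match = cv2.matchTemplate(gray_image, template, cv2.TM_CCOEFF_NORMED)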

How to resize font in plot_net feature of phyloseq?

I want to resize the text in plot_net but none of the options are working for me. I am trying:
p <- plot_net(physeqP, maxdist = 0.4, point_label = "ID", color = "Cond", shape = "Timeperiod")
p + geom_text(size = 15)
This gives me the error:
"Error: geom_text requires the following missing aesthetics: x, y, label".
Can anyone please tell me how I can fix this? I don't want to resize the legend or the axes, only the node text. The image is drawn using phyloseq, but since the font size is very small, I want to make the labels more prominent.
Without an example it's hard to reproduce.
p <- plot_net(physeqP, maxdist = 0.4, point_label = "ID"
, color = "Cond", shape = "Timeperiod", cex_val = 2)
Try using cex_val, a numeric value indicating the size of text labels (default 1). I believe this parameter comes from the NeuralNetTools package:
https://www.rdocumentation.org/packages/NeuralNetTools/versions/1.5.0/topics/plotnet