Screenshot of expected size in existing Selenium test cases? - selenium

I have a Chrome Selenium project written in Python, and I am trying to add a screenshot feature to the existing test cases.
Details:
In my environment.py file, I have the centralized driver defined with the following options:
options.add_argument("--headless")
options.add_argument("--no-sandbox")
options.add_argument("--disable-setuid-sandbox")
# options.add_argument("--window-size=1920,1080")
options.add_argument("--disable-gpu")
driver = webdriver.Chrome(context.chromedriver_path, chrome_options=options)
Here is my centralized screenshot function:
def get_screenshot(self, filename=None):
    if filename is None:
        filename = ....
    full_path = "{}/{}.{}".format(self.results_folder, filename, "png")
    self.logger.info("Saving screenshot to %s" % full_path)
    self.driver.save_screenshot(full_path)
    allure.attach(
        self.driver.get_screenshot_as_png(),
        name=full_path,
        attachment_type=AttachmentType.PNG,
    )
And here is one of my individual test steps that takes screenshots using this function:
def step_when_the_user_to_verify_the_created_profile(context):
    context.navigationbar.clickProfileLink()
    context.screenshot.save()
    context.blueprintdetailpage.clickAddingPlan()
    context.screenshot.save()
Everything works, but the resolution of the saved screenshot is always 800 x 600, while what I am looking for is 1920 x 1080. If I were to set the 1920 x 1080 resolution as an extra driver option (the line currently commented out in environment.py), I suspect all my existing test cases would break.
So my question is: how can I keep my existing test cases as they are and, at the same time, generate screenshots at 1920 x 1080?
Thanks for the tips.
Chun
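One possible direction (a rough sketch, not a verified answer for this exact setup; the helper name get_screenshot_full_hd is made up for illustration): keep the driver options as they are and temporarily resize the window inside the screenshot helper, restoring the previous size afterwards so the existing test cases still run at the default resolution.
def get_screenshot_full_hd(self, filename):
    # Remember the current window size so the running test keeps its layout.
    original_size = self.driver.get_window_size()
    self.driver.set_window_size(1920, 1080)
    try:
        full_path = "{}/{}.{}".format(self.results_folder, filename, "png")
        self.driver.save_screenshot(full_path)
    finally:
        # Restore the previous size even if the screenshot fails.
        self.driver.set_window_size(original_size["width"], original_size["height"])
Whether the page actually re-renders at 1920 x 1080 in headless mode depends on the application, so this is only a starting point.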

Related

Template Matching through python API on Linux desktop

I'm following the tutorial on using your own template images to do object 3D pose tracking, but I'm trying to get it working on Ubuntu 20.04 with a live webcam stream.
I was able to successfully make my index .pb file with extracted KNIFT features from my custom images.
It seems the next thing to do is load the provided template matching graph (in mediapipe/graphs/template_matching/template_matching_desktop.pbtxt) (replacing the index_proto_filename of the BoxDetectorCalculator with my own index file), and run it on a video input stream to track my custom object.
I was hoping that would be easiest to do in Python, but I am running into dependency problems.
(I installed the mediapipe Python package with pip3 install mediapipe.)
First, I couldn't find how to directly load a .pbtxt file as a graph in the mediapipe Python API, but that's OK; I just load the text it contains and use that.
template_matching_graph_filepath=os.path.abspath("~/mediapipe/mediapipe/graphs/template_matching/template_matching_desktop.pbtxt")
graph = mp.CalculatorGraph(graph_config=open(template_matching_graph_filepath).read())
But I get missing calculator targets.
No registered object with name: OpenCvVideoDecoderCalculator; Unable to find Calculator "OpenCvVideoDecoderCalculator"
or
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:309] Error parsing text-format mediapipe.CalculatorGraphConfig: 54:70: Could not find type "type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions" stored in google.protobuf.Any.
It seems similar to this troubleshooting case but, since I'm not trying to compile an application, I'm not sure how to link in the missing calculators.
How do I make the mediapipe Python API aware of these graphs?
UPDATE:
I made decent progress by adding the calculators that the template_matching graph depends on to the cc_library deps in the mediapipe/python/BUILD file:
cc_library(
    name = "builtin_calculators",
    deps = [
        "//mediapipe/calculators/image:feature_detector_calculator",
        "//mediapipe/calculators/image:image_properties_calculator",
        "//mediapipe/calculators/video:opencv_video_decoder_calculator",
        "//mediapipe/calculators/video:opencv_video_encoder_calculator",
        "//mediapipe/calculators/video:box_detector_calculator",
        "//mediapipe/calculators/tflite:tflite_inference_calculator",
        "//mediapipe/calculators/tflite:tflite_tensors_to_floats_calculator",
        "//mediapipe/calculators/util:timed_box_list_id_to_label_calculator",
        "//mediapipe/calculators/util:timed_box_list_to_render_data_calculator",
        "//mediapipe/calculators/util:landmarks_to_render_data_calculator",
        "//mediapipe/calculators/util:annotation_overlay_calculator",
        ...
I also modified solution_base.py so it knows about BoxDetector's options.
from mediapipe.calculators.video import box_detector_calculator_pb2
...
CALCULATOR_TO_OPTIONS = {
    'BoxDetectorCalculator':
        box_detector_calculator_pb2.BoxDetectorCalculatorOptions,
Then I rebuilt and installed mediapipe python from source with:
~/mediapipe$ python3 setup.py install --link-opencv
Then I was able to make my own class derived from SolutionBase
import numpy as np
from typing import NamedTuple

from mediapipe.python.solution_base import SolutionBase

class ObjectTracker(SolutionBase):
    """Process a video stream and output a video with edges of templates highlighted."""

    def __init__(self, object_knift_index_file_path):
        super().__init__(
            binary_graph_path=object_pose_estimation_binary_file_path,
            calculator_params={"BoxDetector.index_proto_filename": object_knift_index_file_path},
        )

    def process(self, image: np.ndarray) -> NamedTuple:
        return super().process(input_data={'input_video': image})

ot = ObjectTracker(object_knift_index_file_path="/path/to/my/object_knift_index.pb")
Finally, I process a video frame from a cv2.VideoCapture
cv_video = cv2.VideoCapture(0)
result, frame = cv_video.read()
input_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
res = ot.process(image=input_frame)
So close! But I run into this error which I just don't know what to do with.
/usr/local/lib/python3.8/dist-packages/mediapipe/python/solution_base.py in process(self, input_data)
326 if data.shape[2] != RGB_CHANNELS:
327 raise ValueError('Input image must contain three channel rgb data.')
--> 328 self._graph.add_packet_to_input_stream(
329 stream=stream_name,
330 packet=self._make_packet(input_stream_type,
RuntimeError: Graph has errors:
Calculator::Open() for node "BoxDetector" failed: ; Error while reading file: /usr/local/lib/python3.8/dist-packages/
Looks like CalculatorNode::OpenNode() is trying to open the python API install path as a file. Maybe it has to do with the default_context. I have no idea where to go from here. :(
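One thing that might be worth checking (a guess based on the truncated error path, not a confirmed fix): whether the index_proto_filename value reaches BoxDetector as an absolute path to a file that actually exists, since the error suggests the path is being resolved relative to the installed package directory. A small sanity-check sketch:
import os

# Hypothetical sanity check before constructing the solution: make the index
# path absolute and confirm the file is actually on disk.
object_knift_index_file_path = os.path.abspath("/path/to/my/object_knift_index.pb")
assert os.path.isfile(object_knift_index_file_path), (
    "KNIFT index file not found: %s" % object_knift_index_file_path)

ot = ObjectTracker(object_knift_index_file_path=object_knift_index_file_path)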

Pyinstaller, Multiprocessing, and Pandas - No such file/directory [duplicate]

Python v3.5, Windows 10
I'm using multiple processes and trying to capture user input. From everything I've searched, odd things happen when using input() with multiple processes. After 8+ hours of trying, nothing I implemented worked; I'm positive I am doing it wrong, but I can't for the life of me figure it out.
The following is a very stripped-down program that demonstrates the issue. It works fine when I run it within PyCharm, but when I use PyInstaller to create a single executable it fails: the program is constantly stuck in a loop asking the user to enter something, as shown below.
I am pretty sure it has to do with how Windows takes in standard input, from things I've read. I've also tried passing the user input variables as Queue() items to the functions, but I hit the same issue. I read you should put input() in the main Python process, so I did that under if __name__ == '__main__':
from multiprocessing import Process
import time

def func_1(duration_1):
    while duration_1 >= 0:
        time.sleep(1)
        print('Duration_1: %d %s' % (duration_1, 's'))
        duration_1 -= 1

def func_2(duration_2):
    while duration_2 >= 0:
        time.sleep(1)
        print('Duration_2: %d %s' % (duration_2, 's'))
        duration_2 -= 1

if __name__ == '__main__':
    # func_1 user input
    while True:
        duration_1 = input('Enter a positive integer.')
        if duration_1.isdigit():
            duration_1 = int(duration_1)
            break
        else:
            print('**Only positive integers accepted**')
            continue
    # func_2 user input
    while True:
        duration_2 = input('Enter a positive integer.')
        if duration_2.isdigit():
            duration_2 = int(duration_2)
            break
        else:
            print('**Only positive integers accepted**')
            continue
    p1 = Process(target=func_1, args=(duration_1,))
    p2 = Process(target=func_2, args=(duration_2,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
You need to use multiprocessing.freeze_support() when you produce a Windows executable with PyInstaller.
Straight out from the docs:
multiprocessing.freeze_support()
Add support for when a program which uses multiprocessing has been frozen to produce a Windows executable. (Has been tested with py2exe, PyInstaller and cx_Freeze.)
One needs to call this function straight after the if __name__ == '__main__' line of the main module. For example:
from multiprocessing import Process, freeze_support

def f():
    print('hello world!')

if __name__ == '__main__':
    freeze_support()
    Process(target=f).start()
If the freeze_support() line is omitted then trying to run the frozen executable will raise RuntimeError.
Calling freeze_support() has no effect when invoked on any operating system other than Windows. In addition, if the module is being run normally by the Python interpreter on Windows (the program has not been frozen), then freeze_support() has no effect.
In your example you also have unnecessary code duplication you should tackle.
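For illustration, here is a sketch of how the original program might look with freeze_support() added and the duplicated input loop factored into a helper (untested, but it shows both points):
from multiprocessing import Process, freeze_support
import time

def countdown(label, duration):
    # Count down once per second until the duration is exhausted.
    while duration >= 0:
        time.sleep(1)
        print('%s: %d s' % (label, duration))
        duration -= 1

def ask_positive_int(prompt):
    # Keep asking until the user enters a non-negative integer.
    while True:
        value = input(prompt)
        if value.isdigit():
            return int(value)
        print('**Only positive integers accepted**')

if __name__ == '__main__':
    freeze_support()  # required when frozen into a Windows executable
    duration_1 = ask_positive_int('Enter a positive integer.')
    duration_2 = ask_positive_int('Enter a positive integer.')
    p1 = Process(target=countdown, args=('Duration_1', duration_1))
    p2 = Process(target=countdown, args=('Duration_2', duration_2))
    p1.start()
    p2.start()
    p1.join()
    p2.join()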

Colab - Split big tiff file using GDAL

I'm trying to split a huge tiff file into tiles using gdal on Colab.
My google drive is mounted and I can read and write from / into it.
The code is taken from this answer:
com_string = "gdal_translate -of GTIFF -srcwin" + ...
os.system(com_string)
The cell completes but no new files show up on the drive.
Any ideas or another way to achieve the splitting of the file?
This answer gives a suggestion:
You would need to come up with the pixel/line locations or corner coordinates and then loop over the values with gdal_translate.
import os, sys
from osgeo import gdal

dset = gdal.Open(sys.argv[1])
width = dset.RasterXSize
height = dset.RasterYSize
print(width, 'x', height)
tilesize = 5000
for i in range(0, width, tilesize):
    for j in range(0, height, tilesize):
        w = min(i + tilesize, width) - i
        h = min(j + tilesize, height) - j
        gdaltranString = "gdal_translate -of GTIFF -srcwin " + str(i) + ", " + str(j) + ", " + str(w) + ", " \
            + str(h) + " " + sys.argv[1] + " " + sys.argv[2] + "_" + str(i) + "_" + str(j) + ".tif"
        os.system(gdaltranString)
This, of course, relies on your GDAL installation working properly. If the above doesn't work (you still get no files), first try running it in a location that isn't a mounted Google Drive. If that works, you know the problem is with your mounting. If not, next check that you are getting the input image you expect, with something like plt.imshow(your_source_image). If you see the image, move on; if not, your source image is either missing or the path is wrong.
If it still doesn't work, I'd suspect a problem with your GDAL installation. In that case I would first try a very simple function and make sure it gives the result you expect. You can also try running something else on Colab and make sure that works.
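If shelling out with os.system keeps failing silently, another option (a sketch, assuming the GDAL Python bindings are available in the Colab runtime) is to call gdal.Translate directly; it raises visible errors instead of quietly producing nothing, and writes straight to the output path you give it:
import os
from osgeo import gdal

def split_tiff(src_path, out_dir, tilesize=5000):
    # Open the source raster and cut it into tiles with gdal.Translate,
    # avoiding the os.system call entirely.
    dset = gdal.Open(src_path)
    width, height = dset.RasterXSize, dset.RasterYSize
    for i in range(0, width, tilesize):
        for j in range(0, height, tilesize):
            w = min(i + tilesize, width) - i
            h = min(j + tilesize, height) - j
            out_path = os.path.join(out_dir, "tile_%d_%d.tif" % (i, j))
            gdal.Translate(out_path, dset, srcWin=[i, j, w, h], format="GTiff")

# Hypothetical paths on a mounted Drive; adjust to your own layout.
split_tiff("/content/drive/MyDrive/big.tif", "/content/drive/MyDrive/tiles")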

How to get a photo from tello drone using the sdk

The Tello sdk has the streamon and streamoff commands. These start and end the 720p video stream. Is there a way to get a 5 megapixel image through the sdk, like when you put the control app into photo mode? I'm guessing if such a way exists, it is not documented.
While there isn't a UDP command that you can send to the drone to switch modes, there is some Python code that you could try in order to make it work. To make something like that reusable without coding it over again, putting it in a function and calling it like "takePhoto()" is the best solution I have. Here's some code:
# # simple ex_4: Tello_photo
# <\>
from djitellopy import tello
import cv2

me = tello.Tello()
me.connect()
print(me.get_battery())

def TakePhoto(me):
    me.streamoff()
    me.streamon()
    img = me.get_frame_read().frame
    # print(img)
    # img = cv2.resize(img, (360, 240))  # optional
    cv2.imshow("photo", img)  # optional display of the photo
    cv2.waitKey(0)

# # example condition, it can also just be called normally like "TakePhoto"
"""
if (1 + 1) == 2:
    TakePhoto(me)
"""

# # another example, with an actually used if statement
"""
bat = me.get_battery()
if bat < 50:
    TakePhoto(me)
else:
    print("battery too low (for ex)")
"""

# # for now, we'll call it just like this for testing:
TakePhoto(me)
# </>
The code was tested on Windows 10 - PyCharm 2021.2 > Python 3.x >> OpenCV 4.5.1.48.
I hope this helped and I hope it works for you.
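If you want to keep the captured frame rather than only display it, a small follow-up sketch (assuming img is the frame returned by me.get_frame_read().frame above):
import cv2

cv2.imwrite("tello_photo.png", img)  # write the frame to disk as a PNG
# If the saved colours look swapped, the frame came back as RGB; convert first:
# cv2.imwrite("tello_photo.png", cv2.cvtColor(img, cv2.COLOR_RGB2BGR))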

error: (-215:Assertion failed) !_src.empty() in function 'cvtColor' while using OpenCV 4.2 with swift [duplicate]

I am trying to do a basic colour conversion in Python, however I can't seem to get past the error below. I have re-installed Python and OpenCV, and tried on both Python 3.4.3 (latest) and Python 2.7 (which is on my Mac).
I installed OpenCV using Python's package manager, opencv-python.
Here is the code that fails:
frame = cv2.imread('frames/frame%d.tiff' % count)
frame_HSV= cv2.cvtColor(frame,cv2.COLOR_RGB2HSV)
This is the error message:
cv2.error: OpenCV(3.4.3) /Users/travis/build/skvark/opencv-python/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
This error happens because the image didn't load properly, so you have a problem with the previous line, cv2.imread. My suggestions are (see the sketch below):
check if the image exists in the path you give
check if the count variable has a valid number
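A quick way to run both checks (a small sketch using the variable names from the question; count is whatever frame index you are on):
import os
import cv2

count = 0  # whichever frame index you are checking
path = 'frames/frame%d.tiff' % count
print(os.path.exists(path))         # does the file exist at all?
frame = cv2.imread(path)
if frame is None:                   # imread returns None instead of raising
    raise FileNotFoundError("Could not load image: %s" % path)
frame_HSV = cv2.cvtColor(frame, cv2.COLOR_RGB2HSV)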
If anyone is experiencing this same problem when reading a frame from a webcam:
Verify whether your webcam is being used by another task and close it. This will solve the problem.
I spent some time on this error before I realized my camera was online in a Google Hangouts group. Also, make sure your webcam drivers are up to date.
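A quick check along these lines (a sketch):
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()     # ret is False when the camera is busy or unavailable
if not ret or frame is None:
    raise RuntimeError("Webcam frame could not be read; is another app using the camera?")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cap.release()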
I kept getting this error too:
Traceback (most recent call last):
File "face_detector.py", line 6, in <module>
gray_img=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
My cv2.cvtColor(...) was working fine with \photo.jpg but not with \news.jpg. For me, I finally realized that when working on Windows with Python, those escape characters will get you every time! My "bad" photo was being escaped because its file name begins with "n": Python took the \n as an escape character and OpenCV couldn't find the file!
Solution:
Prefix Windows path strings in Python with r (a raw string), as in:
cv2.imread(r".\images\news.jpg")
If the path is correct and the name of the image is OK, but you are still getting the error, use:
from skimage import io
img = io.imread(file_path)
instead of:
cv2.imread(file_path)
The function imread loads an image from the specified file and returns
it. If the image cannot be read (because of missing file, improper permissions, unsupported or invalid format), the function returns an empty matrix ( Mat::data==NULL ).
check if the image exists in the path and verify the image extension (.jpg or .png)
Check whether it's a jpg, png, or bmp file that you are providing, and write the extension accordingly.
Another thing which might be causing this is a 'weird' symbol in your file and directory names. All umlaut (äöå) and other (éóâ etc) characters should be removed from the file and folder names. I've had this same issue sometimes because of these characters.
Most probably there is an error in loading the image; try checking the directory again.
Print the image to confirm whether it actually loaded or not.
In my case, the image was incorrectly named. Check if the image exists and try
import numpy as np
import cv2
img = cv2.imread('image.png', 0)
cv2.imshow('image', img)
I've been in the same situation as well, and my case was because of Korean letters in the path...
After I removed the Korean letters from the folder name, it worked.
Or put # -*- coding: utf-8 -*- (without the square brackets) or something like that in the first line, to make Python understand Korean, or your language, etc.
Then it will work even if there is some Korean in the path, as in my case.
So the thing is, it seems to be something about the path or the letters in it.
People who answered are saying similar things. Hope you guys solve it!
I had the same problem and it turned out that my image names included special characters (e.g. château.jpg), which could not be handled by cv2.imread. My solution was to make a temporary copy of the file, renaming it to e.g. temp.jpg, which could be loaded by cv2.imread without any problems.
Note: I did not check the performance of shutil.copy2 versus other options, so there is probably a better/faster way to make a temporary copy.
import shutil, sys, os, dlib, glob, cv2

for f in glob.glob(os.path.join(myfolder_path, "*.jpg")):
    shutil.copy2(f, myfolder_path + 'temp.jpg')
    img = cv2.imread(myfolder_path + 'temp.jpg')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    os.remove(myfolder_path + 'temp.jpg')
If there are only a few files with special characters, renaming can also be done as an exception, e.g.
for f in glob.glob(os.path.join(myfolder_path, "*.jpg")):
    try:
        img = cv2.imread(f)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    except:
        shutil.copy2(f, myfolder_path + 'temp.jpg')
        img = cv2.imread(myfolder_path + 'temp.jpg')
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        os.remove(myfolder_path + 'temp.jpg')
In my case it was a permission issue. I had to:
chmod a+wrx the image,
then it worked.
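A quick way to check for this from Python before calling imread (a small sketch; the file name is hypothetical):
import os

path = "image.jpg"  # hypothetical path to the problem image
print(os.access(path, os.R_OK))  # False means the process has no read permission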
Please note that the error is in cv2.imread(). Give the right path to the image, and first check whether your system loads the image at all; that can be checked with a simple cv2.imread() call.
After that, see this code for face detection:
import numpy as np
import cv2

cascPath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_frontalface_default.xml"
eyePath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_eye.xml"
smilePath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_smile.xml"

face_cascade = cv2.CascadeClassifier(cascPath)
eye_cascade = cv2.CascadeClassifier(eyePath)
smile_cascade = cv2.CascadeClassifier(smilePath)

img = cv2.imread('WhatsApp Image 2020-04-04 at 8.43.18 PM.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    img = cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here, cascPath, eyePath, and smilePath should hold the right actual paths, picked up from lib/python3.7/site-packages/cv2/data; that is where the haarcascade files live.
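As a side note (a sketch, assuming a pip-installed opencv-python), newer wheels expose the bundled cascade directory via cv2.data.haarcascades, which avoids hard-coding the site-packages path:
import os
import cv2

# cv2.data.haarcascades points at the cascade directory shipped with the wheel.
cascPath = os.path.join(cv2.data.haarcascades, "haarcascade_frontalface_default.xml")
face_cascade = cv2.CascadeClassifier(cascPath)
print(face_cascade.empty())  # False means the cascade was loaded correctly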
Your code can't find the figure, or the name of your figure can't be handled, as the error message indicates.
Solution:
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('哈哈.jpg')  # solution: img = cv2.imread('haha.jpg')
print(img)
If anyone is experiencing this same problem when reading a frame from a webcam (with code similar to frame = cv2.VideoCapture(0)) and working in Jupyter Notebook, you may try:
ensure the previously tried code is not already running, and restart the Jupyter Notebook kernel
separate the line frame = cv2.VideoCapture(0) into its own cell (code before it in the cell above, code after it in the cell below)
run all the code above the cell containing frame = cv2.VideoCapture(0)
run the cell whose only code is frame = cv2.VideoCapture(0); before executing any other cells, make sure the asterisk on the left side of this particular cell disappears and a command order number appears instead, and only then continue
now you can try executing the rest of your code, as your camera input should not be empty anymore :-)
At the end, make sure you close your program and restart the kernel to prepare it for another run.
As #shaked litbak said, this error arose in my initial use of the ASCII-generator, as I naively thought I just had to add my file to the ./data directory and it would load automatically.
I had to pass the --input option with the desired file path.
I checked my image file path and it was correct, and I made sure there were no corrupt images. The problem was with my Mac: it sometimes has a hidden file called .DS_Store saved alongside the image files, and cv2 was having a problem with that file. So I solved the problem by deleting .DS_Store.
I also encountered this type of error:
error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
The solution was to load the image properly. Since the file path was wrong, the images were not loaded and hence it threw this error. Check the path of the image, or, if uploading an image through Colab or Drive, make sure that the image is actually present in the drive.
I encountered the problem when I tried to load an image from a non-ASCII path.
If I simply use imread to load the image, I only get None.
Here is my solution:
import cv2
import numpy as np
path = r'D:\map\上海地图\abc.png'
image = cv2.imdecode(np.fromfile(path, dtype=np.uint8), cv2.IMREAD_UNCHANGED)
A similar thing happens when I save an image to a non-ASCII path: it is silently not saved, without any warning. Here is what I did:
import cv2
import numpy as np
path = r'D:\map\上海地图\abc.png'
cv2.imencode('.png', image)[1].tofile(path)
path = os.path.join(raw_folder, folder, file)
print('[DEBUG] path:', path)
img = cv2.imread(path)  # read the image from the path
if img is None:  # check if the image exists in the path you give
    print('Wrong path:', path)
else:  # it completes the steps
    img = cv2.resize(img, dsize=(128, 128))
    pixels.append(img)
The solution is to add './' before the name of the image before reading it...
Just try downgrading OpenCV.
In the Python shell (in cmd):
>>> import cv2
>>> cv2.__version__
After checking, in cmd:
pip uninstall opencv-python
After uninstalling, install this version:
pip install opencv-python==3.4.8.29