Error -1074396080 occurred at IMAQ Extract Single Color Plane, possible reason(s): Invalid image type
I want to take pictures in real time using two cameras (camera A and camera B) in LabVIEW.
Each camera will capture images with a field of view (FOV) 6 cm wide.
For example, camera A will take several pictures, and then the pictures are combined (stitched together).
The problem I'm facing is that when the program (UI) is running, an error appears: Error -1074396080 occurred at IMAQ Extract Single Color Plane, possible reason(s): Invalid image type.
I have attached the VI.
Please help me solve this problem.

Related

How to get the Number of Drawn Triangles reliably in Unreal?

I am desperately looking for a way to reliably get the average number of triangles drawn during a selected time window in UE 5.1. When I use the command-line tools for that, I get weird numbers. For example, for a scene with a single object that has 220 triangles, RHI Triangles Drawn gives an average of 1607.5 over a 10-second window. I think Unreal includes the triangles used to render the text it prints on screen (the stat text in the upper-left corner), which appears when I use the command Stat StartFile.
I also tried Unreal Insights, but I couldn't find a way to get an average count of triangles drawn over a period from it.
Any ideas on how to get this reliably?

Pose tracking with mediapipe with two cameras and receive the same coordinates for corresponding points in both images

I'm fairly new to computer vision and am currently trying to make the following happen, but have had no success so far.
My situation: I want to track different landmarks of a person with MediaPipe. Tracking with a single camera works fine, and so does tracking with two cameras at the same time. What I want is to receive the same coordinate from each camera for a point that has been detected. For example: camera 1 finds the landmark of the left shoulder at the x and y coordinates (1, 2). For camera 2 the same landmark obviously has different coordinates, say (2, 3). Ideally there is a way to map or transform the coordinates of camera 2 to camera 1.
A picture would show the camera setup here, but I can't post images yet.
So far I've tried stereo camera calibration as described here: https://temugeb.github.io/opencv/python/2021/02/02/stereo-camera-calibration-and-triangulation.html. But this doesn't seem to do the trick. The calibration outputs a rotation and a translation matrix, but when I concatenate them into a transformation matrix and multiply it with the coordinates of camera 2, the results don't match the coordinates of camera 1.
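For context, the R and t from that calibration describe 3D geometry, so they cannot map 2D pixel coordinates of one camera directly onto the other: a pixel has no depth. What they do support is triangulating the shared 3D point from the pair of detections. A minimal numpy sketch, where the intrinsics, baseline, and pixel coordinates are all made-up placeholders standing in for real calibration output:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 2D correspondence into a 3D point."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z) in camera-1 coordinates

# Made-up intrinsics and extrinsics standing in for real calibration results
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.eye(3)                        # camera 2 parallel to camera 1
t = np.array([[-0.1], [0.], [0.]])   # 10 cm baseline

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # camera 1 at the origin
P2 = K @ np.hstack([R, t])

# Pixel coordinates of the same landmark as seen by each camera
X_rec = triangulate(P1, P2, (400., 280.), (360., 280.))
```

Projecting the resulting 3D point back through either camera's projection matrix then yields consistent coordinates in that camera's image frame.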
I've also tried to implement planar homography, but since the observed scene isn't limited to a plane, it doesn't work well.
The idea behind all this is to increase the probability that the landmarks will be detected, and to use both camera streams to build a full set of coordinates for all desired landmarks.
Is it possible to do what I want to do? If so, what's the name of this process?
I'm really grateful for any help. Thanks in advance.

How to detect an image between shapes from camera

I've been searching around the web for how to do this, and I know it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the content between three circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image for grabbing what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my content? If so, what if the image is a bit rotated or distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've seen, I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagine that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function lets you map the desired area of the source image to a destination image, performing cropping, rotation, and scaling in a single go.
Mentioning rotation and scaling suggests that you want to extract a rectangle of a given aspect ratio, i.e. apply a similarity transform. To define such a transform, three points are more than you need; two suffice. The construction of the affine matrix is a little tricky, though.
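One way to do that construction is to treat the points as complex numbers; the resulting 2×3 matrix is exactly the shape warpAffine expects. The point pairs below are invented for illustration:

```python
import numpy as np

def similarity_from_two_points(p0, p1, q0, q1):
    """2x3 similarity (rotation + uniform scale + translation) mapping p0->q0, p1->q1."""
    p0, p1, q0, q1 = (complex(x, y) for x, y in (p0, p1, q0, q1))
    a = (q1 - q0) / (p1 - p0)   # rotation and uniform scale as one complex factor
    b = q0 - a * p0             # translation
    return np.array([[a.real, -a.imag, b.real],
                     [a.imag,  a.real, b.imag]])

# Invented example: two detected circle centres in the photo, and where they
# should land in the upright output image
M = similarity_from_two_points((0, 0), (1, 0), (10, 20), (10, 22))
# Pass M to cv2.warpAffine(src, M, (out_w, out_h)) to crop, rotate, and scale in one go.
```

Note that this pins the output to a fixed aspect ratio; if the three circles could be sheared relative to each other, you would need a full affine from three correspondences instead.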

how to detect an image with iPhone camera?

I am trying to detect an image of a chocolate wrapper with an iPhone/iPad camera.
I already get the image frames from video using AVCapture.
But I can't figure out how to detect a specific part of the image.
Note: the specific part I mean is a pink ribbon, which will always be the same.
Can I match the image? If yes, how? Or should I get the bitmap pixel data and match the unique colour codes (although they can vary depending on lighting conditions and the angle at which the image is taken)?
Try the following two APIs:
http://www.iqengines.com;
http://intopii.com
This solution is for the scenario where you are trying to detect a specific part of the image with a specific colour (this part will be called the reference image). We are going to match the colour codes of our reference image against the image taken from the camera in real time.
Step 1.
Create a media layer that scans only the relevant part of the image taken from the camera. This narrows down the search area for colour codes and makes the process faster, as there are thousands of colour codes in the image.
Step 2.
Get the colour codes from the image we just scanned and create an array of them.
Step 3.
Now repeat step 2 for 3 or 4 different lighting conditions, because some colour codes change under different types of light. Intersect the colour-code arrays to get the common colour codes (referred to from now on as the reference colour codes). These colour codes will be present in most cases.
Step 4.
Convert the image taken from the camera into a pixel buffer (basically a collection of pixels). Now convert the pixel buffer into colour codes and compare them with the reference colour codes. If all of them, or more than 50%, match, you have a potential match.
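A rough sketch of steps 2–4 in Python (the bin size of 32 and the 50% threshold are my own illustrative choices, and the sample patches are invented):

```python
import numpy as np

def color_codes(region, step=32):
    # Step 2: quantize RGB values into coarse bins so lighting jitter
    # maps nearby colours to the same code
    q = region.reshape(-1, 3).astype(int) // step
    return {tuple(c) for c in q}

def reference_codes(regions, step=32):
    # Step 3: keep only the codes common to every lighting condition
    return set.intersection(*(color_codes(r, step) for r in regions))

def is_match(region, ref, step=32, threshold=0.5):
    # Step 4: fraction of camera pixels whose code is in the reference set
    codes = region.reshape(-1, 3).astype(int) // step
    hits = sum(tuple(c) in ref for c in codes)
    return hits / len(codes) >= threshold

# Invented data: the same pink patch photographed under two lights
sunny = np.full((4, 4, 3), (255, 105, 180), dtype=np.uint8)
lamp  = np.full((4, 4, 3), (240, 100, 170), dtype=np.uint8)
ref = reference_codes([sunny, lamp])
```

The coarse quantization is what makes step 3 work: slightly shifted colours under a different lamp still fall into the same bin.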
Note: I followed this process and it works like magic. It sounds more complicated than it is, and it is robust enough to get working in a few hours. In my case, the wrapper I was scanning was squeezed and tested under different lights (sun, a white CFL lamp, a yellow bulb) and photographed from different angles and distances. Still, most of the reference colour codes were always there.
Good luck!

Camera image size

I am writing a Cocoa application for Mac OS X. I'm trying to figure out how to determine the size of the image that will be captured by a camera. I would like to know the size in advance so I can set up a view with an aspect ratio that won't distort the image. For example, if my view is defined to be 640x360 and my camera captures images at 640x480, the displayed image looks short and fat. I'm also displaying some other layers over the image, and I need the image size to scale and position the layers properly.
I won't know the type of camera attached until run time, so I'd like to be able to interrogate the device and get attributes like image size. Thanks for the help...
You are altering the aspect ratio of the image when you display a 640x480 capture in a 640x360 view: you are effectively doing a resize, using the whole image at a different size.
If you don't want to distort the image, but use only a portion of it, you need to crop. Some hardware supports cropping; with other hardware you have to do it in software. Cropping means using only a portion of the original image. In your case, you would discard the bottom 120 lines.
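The arithmetic for picking the crop can be sketched like this (a generic helper, not tied to any particular capture API):

```python
def crop_for_aspect(src_w, src_h, dst_w, dst_h):
    """Largest region of the source that matches the destination aspect ratio."""
    target = dst_w / dst_h
    if src_w / src_h > target:           # source too wide: trim the sides
        return round(src_h * target), src_h
    return src_w, round(src_w / target)  # source too tall: trim top/bottom

# A 640x480 capture shown in a 640x360 view keeps a 640x360 region,
# discarding 120 rows in total.
```

Whether those 120 rows come off the bottom, the top, or are split evenly is a framing choice; a centred crop splits them 60/60.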
Example:
The blue rectangle is the natural, or original image and the red is a crop of it.