Edge detection, then convert into a shape?

I was wondering if anyone knows of an algorithm to retrieve shapes from an edge detection filter.
I basically want to process an image, find all the shapes in it, and then fill them with rectangles using a greedy fill. However, I have not found an algorithm that will help me create the shapes from the edge detection output.
Can anyone point me in the right direction?

Related

Is it possible to process semantic segmentation with a masked image in Tensorflow Lite?

I'm working on an Android app that uses TensorFlow Lite for semantic segmentation.
https://www.tensorflow.org/lite/api_docs/java/org/tensorflow/lite/InterpreterApi
I'm also using a U2net model and processing images which are already masked, like the one below.
The problem is that the masked area can be a circle or an ellipse, so it can't be handled by the normal process.
Feeding this image into the interpreter just returns the same image, with the circular area in white.
So I have to cut out the circle and feed only the circle area to the interpreter.
But feeding the cut-out region doesn't work, because it is out of shape.
The input arrays must be square, just like normal images.
I couldn't find any lead on how to solve this problem.
Any advice would be appreciated.
Thank you.
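One way around the square-input constraint (a NumPy sketch, assuming the mask is available as an array before you hand the tensor to the interpreter): crop the bounding box of the circular mask, zero out everything outside the mask, and pad the crop to a square so the model still receives a rectangular array. The function name and the example sizes here are made up for illustration:

```python
import numpy as np

def masked_square_crop(image, mask):
    """Crop image to the mask's bounding box, zero pixels outside the
    mask, and pad the result to a square array."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1] * mask[y0:y1, x0:x1]
    h, w = crop.shape[:2]
    side = max(h, w)
    out = np.zeros((side, side) + crop.shape[2:], dtype=crop.dtype)
    out[:h, :w] = crop  # pad with zeros on the right/bottom
    return out

# Example: an elliptical mask inside a 48x64 image of constant value 7.
img = np.full((48, 64), 7, dtype=np.uint8)
yy, xx = np.mgrid[:48, :64]
mask = (((yy - 24) / 10.0) ** 2 + ((xx - 32) / 20.0) ** 2 <= 1).astype(np.uint8)
square = masked_square_crop(img, mask)
print(square.shape)
```

The same crop-and-pad can be done on the Java side before calling the interpreter; you just need to remember the offsets (y0, x0) to paste the segmentation result back into the original frame.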

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, as highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point positions on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending toward infinity. I understand it has something to do with the tangent, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2d coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader informing where to complete certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It also could be rewritten to use polygons, but I am trying to keep calculations to a minimum in the shader.
Certain solutions I have tried before have partially worked, but this must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3D model, with points both in front and behind.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
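The usual approach is your first idea: project all eight corners of the mesh's world-space AABB to screen space and take the min/max, with one robustness rule for the case you describe. Below is a sketch in plain Python rather than GDScript, with a simplified pinhole projection (the hypothetical fov_scale parameter) standing in for camera.unproject_position(); the key trick is that whenever any corner lies at or behind the camera plane, the projection blows up toward infinity, so the safe fallback is the full screen rect:

```python
def screen_aabb(corners, fov_scale, width, height):
    """Project 8 AABB corners (camera-space x, y, z with z = forward
    depth) and return the enclosing 2D rect (x0, y0, x1, y1), or the
    full screen if any corner is at or behind the camera plane."""
    pts = []
    for x, y, z in corners:
        if z <= 0.0:  # behind the camera: unprojected position is meaningless
            return (0.0, 0.0, float(width), float(height))
        sx = width * 0.5 + fov_scale * x / z
        sy = height * 0.5 - fov_scale * y / z
        pts.append((sx, sy))
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    # Clamp to the viewport so the rect never exceeds the screen.
    return (max(min(xs), 0.0), max(min(ys), 0.0),
            min(max(xs), float(width)), min(max(ys), float(height)))

# Unit cube centred 5 units in front of a 640x480 camera.
corners = [(x, y, z) for x in (-0.5, 0.5) for y in (-0.5, 0.5)
           for z in (4.5, 5.5)]
rect = screen_aabb(corners, fov_scale=400.0, width=640, height=480)
print(rect)
```

In actual Godot code you would loop over the corners of the MeshInstance's get_aabb() transformed into world space, use camera.unproject_position() for the projection, and camera.is_position_behind() as the behind-the-camera test. This gives the red box from figure 1; the tighter blue box would require projecting the mesh's vertices (or its convex hull) instead of the AABB corners.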

Affine Transform in PDF

I seem to be missing something about how affine transforms work in PDF. I have a requirement to do the following:
A shape whose bottom-left corner is located at the origin
Rotate the shape 90 degrees counter-clockwise
Translate the shape to the destination
Now, when I apply the affine transform, I do not get the proper placement of the shape at the destination. After several experiments, I found that PDF engines apply the rotation around the bottom-left corner of the shape, whereas most literature describes rotation about the center, as if there were no difference between the two.
But I am not able to work out the correct arithmetic behind my transform, so I am unable to achieve the result. I am not very sound in mathematics, so I would appreciate any help in achieving this.
I have prepared some illustrations to show how I am doing the transform.
My understanding of the math behind the transform may be wrong altogether, and I would appreciate some guidance. I am using PDFBox to achieve this. Thanks in advance.
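The arithmetic that reconciles the two conventions can be sketched with NumPy (column-vector convention here; PDF writes points as row vectors, so when you emit the six numbers of a cm operator the composition order is reversed): to rotate about the shape's CENTER while the engine rotates about the origin, conjugate the rotation with translations to and from the center, then translate to the destination. The shape size and destination below are made-up example values:

```python
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

def rotate_ccw_90():
    # cos 90 = 0, sin 90 = 1
    return np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)

# Shape of width w, height h with its bottom-left corner at the origin.
w, h = 4.0, 2.0
cx, cy = w / 2, h / 2
dest = (10.0, 20.0)

# Read right-to-left: move center to origin, rotate, move center back,
# then translate the whole shape to the destination.
M = translate(*dest) @ translate(cx, cy) @ rotate_ccw_90() @ translate(-cx, -cy)

# Apply to the shape's bottom-left corner (0, 0):
p = M @ np.array([0.0, 0.0, 1.0])
print(p[:2])
```

If you instead rotate about the origin (the bottom-left corner), you drop the two center translations, and the corner stays fixed while the rest of the shape swings around it; that difference is exactly the misplacement you are seeing. In PDFBox you can build the same product from Matrix.getTranslateInstance and Matrix.getRotateInstance and multiply them together rather than computing it by hand.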

How to detect an image between shapes from camera

I've been searching around the web about how to do this, and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find cover detection of separate shapes or template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image for grabbing what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is slightly rotated or distorted on the camera?
I'm using Xamarin.iOS for this but from what I already saw, I believe I need to go native for this and any Objective C example is welcome too.
EDIT
Imagining that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function lets you map the desired area of the source image to a destination image, performing cropping, rotation, and scaling in a single go.
Since you mention rotation and scaling, it sounds like you want to extract a rectangle of a given aspect ratio, i.e. perform a similarity transform. To define such a transform, three points are more than needed; two suffice. The construction of the affine matrix is a little tricky, though.

Reduce noise/improve shapes from an Image

Using image segmentation, I segmented objects from a webcam. But because of the bad lighting, I get a lot of noise in the picture. I now want to improve the shape of the found objects. The only method I found is image opening and closing, but the result is not as good as I had hoped. Does anyone know of other methods?
In this folder I have the original image after the segmentation, the image after closing, and a picture of the kind of result I'm looking for.
Thanks in advance
You could try the opening-closing combination operator, which provides better smoothing of the shapes. Furthermore, varying the radii of the opening and closing (the so-called ASF, alternating sequential filter) produces smoother results.
There are also a few demos online that use Fourier descriptors. It's also important to choose a good shape for the structuring elements, since your closing seems to close convex holes in the image.
If it is salt/pepper noise, a simple median filter may be better.