UIImageView half moon slice - objective-c

I'm trying to create an app with groups you can switch between. My idea was to pick the first three photos of the members in the group and lay the images over each other. Stacking three images is not really difficult; the difficult part for me is making the other two images show up like a "half moon" beneath the top image. See the attached image for an example.

It isn't really a half moon. It's more like a crescent moon or lunate shape.
The principle is not a difficult one. Proceed as follows:
Start with an image, roughly a square.
Make an image context the same size as the image.
Fill a circle the size of the image, roughly offset about a third of its width to the left.
Fill another circle the size of the image, roughly offset about two thirds of its width to the left, using Clear blend mode.
Extract the resulting image from the image context.
You now have the desired lunate shape. Now use that lunate shape as a mask or clipping area for the original image.
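In code, both the mask and the clipping stage might look like this - a minimal sketch assuming UIKit/Core Graphics (the method name lunateImageFromImage: is just illustrative):

```objc
- (UIImage *)lunateImageFromImage:(UIImage *)im {
    CGRect r = CGRectMake(0, 0, im.size.width, im.size.height);

    // Build the lunate mask in an empty context the size of the image.
    UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
    CGContextRef con = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(con, [UIColor blackColor].CGColor);
    // A circle the size of the image, offset about a third of its width left.
    CGContextFillEllipseInRect(con, CGRectOffset(r, -r.size.width / 3.0, 0));
    // A second circle, offset about two thirds left, punched out in Clear mode.
    CGContextSetBlendMode(con, kCGBlendModeClear);
    CGContextFillEllipseInRect(con, CGRectOffset(r, -r.size.width * 2.0 / 3.0, 0));
    UIImage *lunate = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clip to the lunate shape and draw the original image through it.
    UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
    con = UIGraphicsGetCurrentContext();
    // Flip into Core Graphics coordinates so the mask and image line up.
    CGContextTranslateCTM(con, 0, r.size.height);
    CGContextScaleCTM(con, 1, -1);
    CGContextClipToMask(con, r, lunate.CGImage);
    CGContextDrawImage(con, r, im.CGImage);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

For each of the three stacked photos you'd vary the circle offsets (or skip the mask entirely for the topmost image).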

Related

How to convert label coordinates for YOLO when cropping images?

I've created over 1200 images with labels for YOLO detection. The problem is that every image is 800x600 and all the labeled objects are in the middle of the image, so I want to crop away the rest, since the objects are centered.
The cropped images would be about 400x300 (cropping left, right, top, and bottom equally), with the objects still in the middle. But how do you convert or change the coordinates without labeling everything all over again?
# (used labelimg for yolo)
0 0.545000 0.722500 0.042500 0.091667
1 0.518750 0.762500 0.097500 0.271667
Here's one of my label .txt files. Sorry for my bad English!
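The conversion itself is just arithmetic in pixel space: un-normalize each value with the old image size, subtract the crop offset from the center coordinates, and re-normalize with the new size. Below is a minimal command-line sketch (written in Objective-C to match the rest of this page, though the same arithmetic works anywhere; the file name label.txt and the hard-coded sizes are assumptions matching your 800x600 to 400x300 centered crop):

```objc
#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        const double W = 800, H = 600;         // original image size
        const double cropX = 200, cropY = 150; // pixels removed from left/top
        const double newW = 400, newH = 300;   // size after cropping

        NSString *text = [NSString stringWithContentsOfFile:@"label.txt"
                                                   encoding:NSUTF8StringEncoding
                                                      error:NULL];
        for (NSString *line in [text componentsSeparatedByString:@"\n"]) {
            NSArray<NSString *> *f = [line componentsSeparatedByString:@" "];
            if (f.count < 5) continue;
            // YOLO stores normalized center x/y and width/height.
            // Centers shift by the crop offset; sizes only rescale.
            double cx = ([f[1] doubleValue] * W - cropX) / newW;
            double cy = ([f[2] doubleValue] * H - cropY) / newH;
            double w  =  [f[3] doubleValue] * W / newW;
            double h  =  [f[4] doubleValue] * H / newH;
            printf("%s %.6f %.6f %.6f %.6f\n", f[0].UTF8String, cx, cy, w, h);
        }
    }
    return 0;
}
```

For the first label above, the new x center is (0.545 × 800 − 200) / 400 = 0.59 and the new width is 0.0425 × 800 / 400 = 0.085. One caveat: a box that extends past the crop boundary (your second label's bottom edge does) also needs to be clamped back into the [0, 1] range.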

How do artists create non-linear, abstract, interpolated gradient images?

I've seen many versions of multicolored, gradient-like images that are both non-linear and heavily stylized, usually in the form of layered, blob-like shapes.
My guess as to how they achieve this effect is:
drawing intersecting blob-like shapes
masking gradients onto the shapes
interpolating the colors on the image.
However, as you'll notice from the distinct lines in the image, the interpolated effect only appears in certain regions of the image. This is the effect I would like to achieve in Metal.
One approach is to draw your solid colors and then apply a zoom or motion blur Core Image filter to achieve the effect of a gradient, leaving some detail depending on where you place the center (for zoom) or the angle you set (for motion).
Here's an example of a before and a couple afters. The original image in this case is drawn with 2D function plotting but you could easily use a static input image/video-frame, draw an image with filled bezier paths, etc.
The second image uses CIZoomBlur, with the input center point just off the image center at (240, 220) and the amount set to 134.9.
The CIMotionBlur filter also produces some interesting gradient effects. Here's the same input image, with CIMotionBlur inputRadius 57.6 and inputAngle -0.415.
I think this could achieve what you're after, provided you set up the original solid-color image as you like and can figure out optimal settings for the filters (angle, center point, etc.).
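For reference, here's roughly what that CIZoomBlur setup looks like in code - a sketch in Objective-C, where sourceCGImage stands in for however you produce the solid-color input:

```objc
#import <CoreImage/CoreImage.h>

CIImage *input = [CIImage imageWithCGImage:sourceCGImage];

// Zoom blur centered just off the image center, as described above.
CIFilter *zoom = [CIFilter filterWithName:@"CIZoomBlur"];
[zoom setValue:input forKey:kCIInputImageKey];
[zoom setValue:[CIVector vectorWithX:240 Y:220] forKey:kCIInputCenterKey];
[zoom setValue:@134.9 forKey:@"inputAmount"];

// Render once; the CGImage can then be uploaded as a Metal texture.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef output = [context createCGImage:zoom.outputImage
                                  fromRect:input.extent];
```

For the motion-blur variant, swap in [CIFilter filterWithName:@"CIMotionBlur"] and set @"inputRadius" and @"inputAngle" instead. And since you're targeting Metal, creating the context with [CIContext contextWithMTLDevice:] lets Core Image render straight into a Metal texture rather than going through a CGImage.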

How to make a black and white dotted image in Photoshop?

I am new to Photoshop. I want to make an image like the one linked below; the image was in color before this result.
Can anyone tell me what I should do to get this?
As mentioned above, the image has been created in several steps; here's a brief description and a screenshot of the steps.
From the original - color or grayscale - image we have to separate the elements we do not want halftoned: for me that's the white areas (step 1) and the black borders (step 2). You may do it any way you want - magic wand/lasso/color selection/etc. Then, to generate the halftone more quickly (see the note on converting to halftones below), we desaturate the image (step 3) and finally generate the halftone (step 4) via Filter / Pixelate / Color Halftone (set all the angles to 0). Afterwards, we simply overlay the borders and whites over the halftone, and there we are.
Overlaying the white areas may seem useless, yet with a bigger halftone dot size it allows smoothing out the raster-jagged edges of those areas. It is not very visible in my image, but in the one you presented, the effect I mentioned is clearly seen along the contours of the ear/eye/hair (etc.).
Note: Keep in mind that there's another way to create halftones, which may be more useful for you:
Image / Mode / Grayscale
Image / Mode / Bitmap
The way I described above is quicker, at least for presentation purposes.
The image you reference isn't the result of a single filter. The hairs and outlines have likely been put in after the halftoning, as have the eyes and ears.

How to detect an image between shapes from camera

I've been searching around the web for how to do this, and I know it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated or distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've already seen, I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagining that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as the result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, hence perform a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky.
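If you already have two matched reference points - say, the centers of two of the detected circles, plus where those centers should land in the output - OpenCV can recover that similarity transform for you rather than you assembling the matrix by hand. A sketch in Objective-C++ (a .mm file, since OpenCV's API is C++; the function and variable names are illustrative, and cv::estimateAffinePartial2D needs OpenCV 3.2 or later):

```objc
#import <opencv2/opencv.hpp>

static cv::Mat extractRegion(const cv::Mat &cameraFrame,
                             cv::Point2f srcA, cv::Point2f srcB,
                             cv::Point2f dstA, cv::Point2f dstB,
                             cv::Size outSize) {
    // Two point pairs pin down a similarity transform exactly
    // (rotation + uniform scale + translation, no shear).
    std::vector<cv::Point2f> from = { srcA, srcB };
    std::vector<cv::Point2f> to   = { dstA, dstB };
    cv::Mat M = cv::estimateAffinePartial2D(from, to); // 2x3 matrix

    // warpAffine crops, rotates and scales in a single pass.
    cv::Mat out;
    cv::warpAffine(cameraFrame, out, M, outSize);
    return out;
}
```

Because the fit is restricted to rotation, uniform scale and translation, a rotated or zoomed camera shot of the same three-circle pattern still maps back to the same output rectangle.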

Camera image size

I am writing a Cocoa application for Mac OS X. I'm trying to figure out how to determine the size of the image that will be captured by a camera. I would like to know the capture size so I can set up a view with an aspect ratio that won't distort the image. For example, if my view is defined to be 640x360 and my camera captures images at 640x480, the displayed image looks short and fat. I'm also displaying some other layers over the image, and I need the image size to scale and position the layers properly.
I won't know the type of camera that is attached until run-time so I'd like to be able to interrogate the device and get attributes like image size. Thanks for the help...
You are altering the aspect ratio of the image when you capture at 640x360 instead of 640x480 or 320x240. You are doing something similar to a resize, using the whole image and making it a different size.
If you don't want to distort the image but use only a portion of it, you need to crop. Some hardware supports cropping; with hardware that doesn't, you have to do it in software. Cropping means using only a portion of the original image. In your case, you would discard the bottom 120 lines.
Example (from here):
The blue rectangle is the natural, or original, image and the red is a crop of it.
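As for interrogating the device: if the camera shows up as a standard capture device, AVFoundation can report its native dimensions at run time. A sketch (assuming AVFoundation is available; the 640x360 target is taken from your example) that also computes the centered crop rect described above:

```objc
#import <AVFoundation/AVFoundation.h>

AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
CMVideoDimensions dims =
    CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription);
NSLog(@"camera captures %d x %d", dims.width, dims.height);

// Crop (rather than resize) to the view's aspect ratio so nothing distorts.
CGFloat targetAspect = 640.0 / 360.0;
CGFloat srcW = dims.width, srcH = dims.height;
CGRect crop;
if (srcW / srcH > targetAspect) {
    // Source is wider than the view: trim the sides.
    CGFloat w = srcH * targetAspect;
    crop = CGRectMake((srcW - w) / 2.0, 0, w, srcH);
} else {
    // Source is taller: trim top and bottom
    // (120 lines in total for the 640x480 -> 640x360 case).
    CGFloat h = srcW / targetAspect;
    crop = CGRectMake(0, (srcH - h) / 2.0, srcW, h);
}
```

Note this version splits the 120 discarded lines evenly between top and bottom; dropping them all from the bottom, as suggested above, is just a different crop origin.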