How to crop a UIImage by selecting an area - Objective-C

I'm searching for a possibility to crop UIImages. I've found lots of examples via Google of how to do this, but I want to do a bit more than just crop the image.
It would be nice if the user could choose which area of the UIImage will be cropped. In other languages, for example JavaScript, there are many plugins for this. I'm looking for something like this:
http://odyniec.net/projects/imgareaselect/
Does anybody know if a similar project exists for Objective-C? Thank you!

After days of searching I found out that there is no "plugin" similar to ImgAreaSelect. :-(
The best thing I found was this:
https://github.com/barrettj/BJImageCropper
It wasn't very hard to adapt this project to my needs: now I can choose a proportional area with a minimum size. :-)
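For reference, once you have the selected rect from the cropper, the actual crop is small. A minimal sketch (the function name is mine), assuming the rect is in the image's point coordinate system:

UIImage *CropImage(UIImage *image, CGRect rect) {
    // CGImageCreateWithImageInRect works in pixel coordinates, so scale the
    // rect if the image has a retina scale factor.
    CGRect pixelRect = CGRectMake(rect.origin.x * image.scale,
                                  rect.origin.y * image.scale,
                                  rect.size.width * image.scale,
                                  rect.size.height * image.scale);
    CGImageRef cropped = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(cropped);
    return result;
}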

You could always make a black & white mask of the area and then mask the image (a white area on the mask will result in a transparent area on the masked image).
Link to a nice tutorial on how to mask an image.
As an addition, you could later calculate the minimum frame that holds the complete masked (cropped) image and crop the result, to get rid of excess transparent areas.
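A minimal sketch of the masking step itself (the function name is mine), assuming the mask is a grayscale image without an alpha channel; white areas become transparent, matching the convention above:

UIImage *MaskImage(UIImage *image, UIImage *maskImage) {
    CGImageRef maskRef = maskImage.CGImage;
    // Build a pure image mask from the grayscale mask image.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef),
                                        NULL,    // no decode array
                                        false);  // do not interpolate
    CGImageRef masked = CGImageCreateWithMask(image.CGImage, mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(mask);
    CGImageRelease(masked);
    return result;
}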


How to detect an image between shapes from camera

I've been searching around the web for how to do this, and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated or distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've seen so far, I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagine that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as the result:
Since the images come from the camera, they can be rotated or scaled up or down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation, and scaling in a single step.
Since you mention rotation and scaling, you presumably want to extract a rectangle of a given aspect ratio, i.e. apply a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky.
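A hedged sketch of that construction, in Objective-C++ with OpenCV (the point and size values in the usage comment are placeholders): given two circle centers found in the camera frame and the spots where you want them in the output, build the 2x3 similarity matrix and warp.

#import <opencv2/opencv.hpp>
#include <cmath>

// Build the 2x3 similarity matrix that maps p0 -> q0 and p1 -> q1.
static cv::Mat SimilarityFromTwoPoints(cv::Point2f p0, cv::Point2f p1,
                                       cv::Point2f q0, cv::Point2f q1) {
    cv::Point2f dp = p1 - p0, dq = q1 - q0;
    double scale = std::hypot(dq.x, dq.y) / std::hypot(dp.x, dp.y);
    double angle = std::atan2(dq.y, dq.x) - std::atan2(dp.y, dp.x);
    double a = scale * std::cos(angle), b = scale * std::sin(angle);
    // The rotation+scale block is [a -b; b a]; the last column is translation.
    return (cv::Mat_<double>(2, 3) <<
        a, -b, q0.x - (a * p0.x - b * p0.y),
        b,  a, q0.y - (b * p0.x + a * p0.y));
}

// Usage: map two detected circle centers to fixed spots in a 400x300 output.
// cv::Mat M = SimilarityFromTwoPoints(c0, c1, cv::Point2f(40, 40), cv::Point2f(360, 40));
// cv::warpAffine(cameraFrame, result, M, cv::Size(400, 300));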

Is there a way to take an image file and make its background transparent via VB.NET?

We have a system where people have a face shot taken with a DSLR camera. We need the images with a transparent background. What we're currently doing is taking each image and editing and cropping it in Photoshop, removing the background with the Magic Eraser tool.
What I am looking for is a way to process the image and automatically erase the semi-white background we have, along with the resizing and cropping. Is there some kind of library or code sample that does this without requiring manual intervention?
This is a really complex problem. As the answer below suggests, you'll need to do a fuzzy match on each pixel and set it to transparent, but you also need to check nearby pixels that are close in color yet should be kept. A white tag on a shirt, white eyelids, hair, pale skin reflecting the flash: all are candidates to be removed by any greedy fuzzy logic.
Think about the Magic Wand tool in Photoshop. How good is it at detecting the edges of the person in the picture? And that's the top standard of image-editing software, with thousands of engineering hours behind it.
This is not a feasible request for a Q&A format, and it is one of those things that humans still do better than machines. BUT that doesn't mean it's not possible, and who knows, you might be the one to do it. Just don't do it in VB.NET, please :)
Some VB.NET code to get an idea of what you need to do:
Dim source As New Bitmap(filepath)
' Copy into a 32-bit ARGB bitmap so the alpha channel is supported.
Dim faceShot As New Bitmap(source.Width, source.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb)
Using g As Graphics = Graphics.FromImage(faceShot)
    g.DrawImage(source, 0, 0, source.Width, source.Height)
End Using
For y As Integer = 0 To faceShot.Height - 1
    For x As Integer = 0 To faceShot.Width - 1
        Dim pixel As Color = faceShot.GetPixel(x, y)
        ' The fuzzy match happens here: accept anything between antique white (250,235,215) and white (255,255,255).
        If pixel.R >= 250 AndAlso pixel.G >= 235 AndAlso pixel.B >= 215 Then
            faceShot.SetPixel(x, y, Color.FromArgb(0, pixel.R, pixel.G, pixel.B)) ' alpha = 0
        End If
    Next
Next
You could use this as a starting point for processing a single image:
http://www.java2s.com/Code/VB/2D/ProcessanImageinvertPixel.htm
Basically, if you have a constant background color (like a TV green screen), it's just a matter of selecting pixels close to the color you are erasing and setting their alpha to 0 (transparent). Treating the RGB values like XYZ coordinates, you can compute the 3D distance from your background color and make everything within a certain threshold transparent.
As an improvement, you could also make everything within a second, slightly larger threshold semi-transparent, so the edges right around hair and similar details look softer and less harsh.
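That distance test is language-agnostic; a minimal sketch in C for brevity (the helper name and thresholds are made up, and the arithmetic ports directly to VB.NET):

#include <math.h>

// Alpha for a pixel given its 3D distance from the background color:
// fully transparent inside `hard`, fading up to fully opaque at `soft`.
static unsigned char AlphaForPixel(int r, int g, int b,
                                   int bgR, int bgG, int bgB,
                                   double hard, double soft) {
    double dr = r - bgR, dg = g - bgG, db = b - bgB;
    double d = sqrt(dr * dr + dg * dg + db * db);
    if (d <= hard) return 0;    // background: fully transparent
    if (d >= soft) return 255;  // subject: fully opaque
    return (unsigned char)(255.0 * (d - hard) / (soft - hard)); // soft edge
}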
Alternatively, you could probably do the exact same thing with good results in Photoshop, as it supports batch processing.
Edit: thinking about it some more, you may want to use a green-screen-style background instead of the off-white one you described, as you may otherwise make people's eyes transparent. I would definitely try to batch it in Photoshop/Gimp/etc.

Detecting what percent of a drawn image has been "erased"

Let's say I have a solid, irregularly shaped (but enclosed) shape on screen in iOS (one colour). I then want to "erase" portions of that shape by dragging my finger around, like you would in a typical kids' colouring app, erasing with a fixed brush size wherever I touch the screen.
I could easily accomplish all of this with something like an image mask and touch detection. However, as a requirement, I also need to determine the rough percentage of the shape that remains.
For example, I need to know when 50% of the enclosed shape has been "erased".
What's the best way of approaching this problem? Are there any existing iOS-compatible libraries that can handle it? I'm thinking that I would need to keep track of a ton of polygons and calculate all the overlaps, but it seems like there must be a better solution.
EDIT: I have done research into this problem; however, tracking all the polygons manually and calculating all their positions and area overlaps seems overly complicated. I was simply wondering if anyone else has run into a similar issue and found a better approach.
You will first need to know the fixed size of the image view. Then, when the image is loaded, work out what fraction of it is already filled in:
double percentageFilledIn = ((double)nonBlankPixelCount/totalpixels);
After you get that value, use that percentage as your baseline.
Your new calculation will look like this:
double percentageOfImageLeft = ((double)nonBlankPixelCount/totalpixels/percentageFilledIn);
This calculation will likely be processor intensive, so I would only run it sparingly.
Since this post is not about code and more about logic, I will let you determine your own logic for detecting non-blank pixels.
Here is how to find a pixel's color:
How to get Coordinates and PixelColor of TouchPoint in iOS/ObjectiveC
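For the non-blank-pixel count itself, a hedged sketch in Objective-C (the function name is mine; it assumes the erasable shape lives in a UIImage with an alpha channel):

#import <UIKit/UIKit.h>

// Render the image into a bitmap buffer and count pixels with non-zero alpha.
static double NonBlankFraction(UIImage *image) {
    CGImageRef cg = image.CGImage;
    size_t w = CGImageGetWidth(cg), h = CGImageGetHeight(cg);
    uint8_t *data = (uint8_t *)calloc(w * h * 4, 1);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, w, h, 8, w * 4, space,
        (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cg);
    size_t nonBlank = 0;
    for (size_t i = 0; i < w * h; i++) {
        if (data[i * 4 + 3] > 0) nonBlank++;  // alpha byte of each RGBA pixel
    }
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    free(data);
    return (double)nonBlank / (double)(w * h);
}

Run it once when the shape loads to get the baseline, then again after erasing; dividing the two gives the percentage left, as in the formulas above.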
Good luck.

How to add a shadow to a UIImageView which fits the shape of the image content, but with a rotation and shift effect

I have been looking for a solution on the web for a long time. Most tutorials only cover the simple case of adding a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow can perfectly fit the shape of the content image, provided the image itself has an alpha channel. For example, if the image is an animal with a transparent background, the shadow has the same shape as that animal (not a rectangular shadow matching the UIImageView frame).
But that is not enough. What I need is to apply some changes to the shadow so that it has a rotation angle and a compressed (squeezed or shifted) effect, so it looks like the sunlight comes from a certain spot.
To demonstrate what I need, I uploaded 2 images below, captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also "pin shaped"; but it is not simply "offset" by a CGSize. You can see the top of the shadow is rotated right by about 35 degrees and the height is slightly squeezed.
When we tap and hold a pin, the shadow is even animated away from the pin, so I believe such a shadow can be drawn programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot produce this effect.
If anyone knows the answer or any better words to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the app, so using a tool like Photoshop to pre-render the shadow is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
It doesn't sound like you can use CALayer shadows, since those are meant for a specific use case. The approach Apple takes with the pins on the map is to have two separate images, created ahead of time (e.g. in Photoshop), positioned within the map relative to a reference point.
If you really do need to do this at run time, it should still be possible using Core Graphics or Core Image. To get a blurred shadow appearance, you can use a blur CIFilter such as CIGaussianBlur, then convert the image to grayscale. And to get that compressed look, you just need to resize and skew the image.
Once you have the two separate images, you can either take the CGImageRef for the shadow image and set it as the contents of another sublayer, or add it as a separate view.
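A hedged sketch of that two-image idea in Core Graphics: build a solid silhouette from the image's alpha channel, draw it skewed and squashed as the shadow, then draw the real image on top. The function names are mine, and the shear and squash values are illustrative guesses, not Apple's actual pin parameters.

static UIImage *Silhouette(UIImage *image, UIColor *color) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    [color setFill];
    // SourceIn keeps the fill color only where the image is opaque.
    UIRectFillUsingBlendMode((CGRect){CGPointZero, image.size},
                             kCGBlendModeSourceIn);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

UIImage *ImageWithSkewedShadow(UIImage *image) {
    UIImage *shadow = Silhouette(image, [UIColor colorWithWhite:0 alpha:0.35]);
    CGSize size = CGSizeMake(image.size.width * 2, image.size.height);
    UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSaveGState(ctx);
    // Anchor at the image's baseline, shear the shadow's top to the right,
    // and squash it to half height.
    CGContextTranslateCTM(ctx, 0, image.size.height);
    CGContextConcatCTM(ctx, CGAffineTransformMake(1, 0, -0.7, 0.5, 0, 0));
    [shadow drawAtPoint:CGPointMake(0, -image.size.height)];
    CGContextRestoreGState(ctx);

    [image drawAtPoint:CGPointZero];  // the undistorted image on top
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

From there you could blur the silhouette first (e.g. with CIGaussianBlur) for a softer edge, or hand the shadow image to a sublayer behind the image view, as described above.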
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.

OpenCV cvFindContours questions

I'm following this guide: http://www.aishack.in/2010/08/sudoku-grabber-with-opencv/2/
and adapting it to iOS 5.0.
I managed to find the largest contour (the sudoku "board"); however, it only locates the surrounding square, without the inner grid lines that the tutorial finds. Can this be easily solved?
I'll try to find a way around it, but I would still like to know. Thanks!
I assume that you are using the camera right now. Try loading the image that is used in the tutorial and check whether your implementation works on that image. Then you can continue with clean sudoku images from the web. You may also change the camera angle and distance from the sudoku when taking pictures to get a better, clearer view.
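A hedged sketch of that kind of test harness, in Objective-C++ (assuming an OpenCV 3+ iOS build; "sudoku.png" is a placeholder asset name). Note that cv::RETR_EXTERNAL returns only the outermost contours, so if you want the inner grid lines as contours too, try cv::RETR_LIST or cv::RETR_TREE; the tutorial itself, if I recall, finds the grid lines with a Hough transform rather than contours.

#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>  // UIImageToMat

static void TestContoursOnStaticImage(void) {
    UIImage *test = [UIImage imageNamed:@"sudoku.png"];  // placeholder asset
    cv::Mat mat, gray;
    UIImageToMat(test, mat);
    cv::cvtColor(mat, gray, cv::COLOR_RGBA2GRAY);
    cv::adaptiveThreshold(gray, gray, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY_INV, 11, 2);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(gray, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);
    // Pick the largest contour by area; this should be the outer board square.
    double bestArea = 0;
    for (size_t i = 0; i < contours.size(); i++) {
        double a = cv::contourArea(contours[i]);
        if (a > bestArea) bestArea = a;
    }
    NSLog(@"%lu contours, largest area %.0f",
          (unsigned long)contours.size(), bestArea);
}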