I created a custom camera control with an overlay view.
Now I am zooming the image using a slider:
imagePickerController.cameraViewTransform =
    CGAffineTransformScale(initialTransform, MainSlider.value, MainSlider.value);
where initialTransform is the initial camera view transform and MainSlider is the slider I use to get the zoom level, from 1 to 4.
So each time I zoom with the slider, I start from initialTransform and scale according to the slider value.
I am able to zoom the preview this way. But when I capture a photo using
[imagePickerController takePicture];
it gives me only the original picture; it does not give me any edited image.
This original image is the same as without zoom.
I want to get the image as it was zoomed, i.e. whatever is showing on the screen.
I have searched a lot for this. I know we can use GetScreenCapture(),
but it can be cause for rejection of the app, and it also lowers the image quality.
You need to apply the same transform to the image after it has been captured. Setting cameraViewTransform will only affect the display, as you've noticed. When you apply a uniform scaling transform (e.g. to zoom) this is a digital zoom. You are not increasing the pixel resolution. You'll get the pic back from the camera and then you can crop/scale it to the size you want when processing the image. You should do your processing on a background thread to minimize disrupting the main thread.
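For example, here is a minimal sketch (not from the original answer) of cropping the captured photo to match the slider's zoom, assuming the zoom is centered and reusing the same MainSlider value; orientation handling is simplified and handleZoomedImage: is a hypothetical handler:

// Sketch: crop the captured photo to mimic the preview's digital zoom.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *original = info[UIImagePickerControllerOriginalImage];
    CGFloat zoom = MainSlider.value; // 1.0 to 4.0, as in the question

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // A centered crop rect whose size is the full image divided by the zoom.
        CGFloat w = original.size.width  / zoom;
        CGFloat h = original.size.height / zoom;
        CGRect cropRect = CGRectMake((original.size.width  - w) / 2.0,
                                     (original.size.height - h) / 2.0,
                                     w, h);
        CGImageRef croppedRef = CGImageCreateWithImageInRect(original.CGImage, cropRect);
        UIImage *zoomed = [UIImage imageWithCGImage:croppedRef
                                              scale:original.scale
                                        orientation:original.imageOrientation];
        CGImageRelease(croppedRef);

        dispatch_async(dispatch_get_main_queue(), ^{
            // Hand the zoomed image back on the main thread (hypothetical handler).
            [self handleZoomedImage:zoomed];
        });
    });
}

Because the crop rect shrinks as the zoom grows, the result has fewer pixels than the original, which is exactly the digital-zoom trade-off described above.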
I have a situation where I'm trying to draw an image into a display's CGContext retrieved using CGDisplayGetDrawingContext. Despite having the image at the correct high resolution, using CGContextDrawImage to draw the image onto the context results in a pixelated image. I've also tried scaling down the image in a bitmap context (using CGBitmapContextCreate) and then drawing that one onto the display's context, but that also results in a pixelated image (I thought it might retain the DPI; it was a long shot). Any idea how to fix this?
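For reference, a rough sketch of the scale-down-then-draw attempt described above (display, sourceImage, and the target size are placeholder names, not from the question):

// Sketch: pre-scale the image in a bitmap context with high-quality
// interpolation, then draw the result into the display's context.
CGContextRef displayCtx = CGDisplayGetDrawingContext(display); // captured display
size_t targetW = 940, targetH = 560;                           // hypothetical size

CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapCtx = CGBitmapContextCreate(NULL, targetW, targetH, 8,
                                               targetW * 4, cs,
                                               kCGImageAlphaPremultipliedLast);
CGContextSetInterpolationQuality(bitmapCtx, kCGInterpolationHigh);
CGContextDrawImage(bitmapCtx, CGRectMake(0, 0, targetW, targetH), sourceImage);
CGImageRef scaled = CGBitmapContextCreateImage(bitmapCtx);

CGContextSetInterpolationQuality(displayCtx, kCGInterpolationHigh);
CGContextDrawImage(displayCtx, CGRectMake(0, 0, targetW, targetH), scaled);

CGImageRelease(scaled);
CGContextRelease(bitmapCtx);
CGColorSpaceRelease(cs);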
I am using CIDetector to find faces in a picture.
The coordinates of the faces it returns are absolute coordinates in the image file (the image dimensions are much larger than the screen size, obviously).
I tried to use the convertRect:toView: method. The image itself is not a UIView, so the method doesn't work; also, I have a few views embedded inside each other, with the image finally being shown in the innermost one.
I want to convert the bounds of the found faces in the image to the exact location of the face being shown on the screen in the embedded image view.
How can this be accomplished?
Thanks!
The image being shown on the phone is scaled to fit the screen with aspect fit.
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled down (unless you reduce its size before running the face detection routine; maybe not a bad idea). You may also need to rotate the image to the correct orientation.
There is a routine in one of the answers here for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use the scale in order to map the CIDetector coordinates from the full size image to the scaled down image shown in a UIImageView.
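Putting those pieces together, a minimal sketch (assuming an aspect-fit UIImageView named imageView displaying image, and a CIFaceFeature named faceFeature; these names are placeholders):

// Sketch: map a CIFaceFeature's bounds into an aspect-fit UIImageView.
CGRect faceRect = faceFeature.bounds;          // Core Image coords, origin bottom-left
CGSize imageSize = image.size;

// 1. Flip vertically into UIKit's top-left origin.
faceRect.origin.y = imageSize.height - faceRect.origin.y - faceRect.size.height;

// 2. Compute the aspect-fit scale and the letterbox offsets.
CGFloat scale = MIN(imageView.bounds.size.width  / imageSize.width,
                    imageView.bounds.size.height / imageSize.height);
CGFloat offsetX = (imageView.bounds.size.width  - imageSize.width  * scale) / 2.0;
CGFloat offsetY = (imageView.bounds.size.height - imageSize.height * scale) / 2.0;

// 3. Apply scale and offsets to get image-view coordinates.
CGRect viewRect = CGRectMake(faceRect.origin.x * scale + offsetX,
                             faceRect.origin.y * scale + offsetY,
                             faceRect.size.width  * scale,
                             faceRect.size.height * scale);

// 4. Since the image view sits inside other views, convert up the hierarchy.
CGRect screenRect = [imageView convertRect:viewRect toView:nil];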
I draw 3200x2000 textured quads in OpenGL. The OpenGLView frame size is set to 940x560. It draws the quad as it should, but when I try to save it as an image (using glReadPixels) and set the glReadPixels area from (0,0) to (3200,2000), it creates 3200x2000 of pixel data; yet when I save it to a file, I see only a small part of the image (940x560 from the bottom-left corner) and the whole other area is black. So how can I read the offscreen area? I tried using a framebuffer, but it's very complicated, with errors while creating it, etc. Is there any other solution?
Situation visualization (screenshots omitted): the original image (3200x2000), the OpenGLView (940x560), and the saved image (3200x2000, with only the bottom-left 940x560 region visible).
So you're rendering to the window. Well, the window has a particular size. And nothing exists outside of that size.
This is part of something OpenGL calls the "pixel ownership test". If a pixel is not owned by the context, then its contents are undefined. Pixels outside of the window are not owned by the context, and therefore their contents are undefined.
This is one reason why framebuffer objects exist: so that you can render outside the size of your window. Though be advised: there is a maximum viewport size limit.
Alternatively, you can render in screen-sized pieces, where you download each piece after each rendering, then move the camera to render the next piece.
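For illustration, a minimal sketch of the FBO path (desktop GL with a current context assumed; drawTexturedQuad stands in for your existing drawing code, and error handling is omitted):

#include <OpenGL/gl3.h>   // OS X; adjust the header for your platform
#include <stdlib.h>

GLuint fbo, colorRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Attach a 3200x2000 color renderbuffer as the render target.
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 3200, 2000);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // The size may exceed GL_MAX_RENDERBUFFER_SIZE or the viewport limit.
}

glViewport(0, 0, 3200, 2000);
drawTexturedQuad();   // hypothetical: your existing quad-drawing code

GLubyte *pixels = malloc(3200 * 2000 * 4);
glReadPixels(0, 0, 3200, 2000, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... write `pixels` to an image file ...
free(pixels);

glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the window's framebuffer
glDeleteRenderbuffers(1, &colorRb);
glDeleteFramebuffers(1, &fbo);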
You haven't given much detail in terms of code or platform.
But I think you should be using offscreen rendering, rather than just reading from the rendered window. If you are unfamiliar with using frame buffer objects, here is a minimal example:
https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/minimalfbo
Edit #1:
Since the OP mentioned that the platform is OS X, I am posting my code below, which shows a minimal FBO example on iOS:
https://github.com/glman74/simpleFBO
I'm developing an iPad application for 2D drawing.
I need a UIView frame size of 4000x4000, but if I set a frame with size 4000x4000 the application crashes after I get a memory warning.
Right now I'm using a 1600x1000 frame size, and the user can add new objects (rectangles) to the frame. The user can also translate the frame along the x and y axes using a pan gesture in order to see or add new objects.
Do you have any suggestions? How can I tackle this problem?
Thanks
Well, I would suggest what has been used in video games for a long time: a tiled LOD mechanism, where tiles are rendered at increasing resolution only when you zoom in toward them, while when zoomed out you render only a lower resolution.
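A rough sketch of that idea on iOS using CATiledLayer (my suggestion; the answer above doesn't name a specific API):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledCanvasView : UIView
@end

@implementation TiledCanvasView
+ (Class)layerClass { return [CATiledLayer class]; }

- (instancetype)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *layer = (CATiledLayer *)self.layer;
        layer.tileSize = CGSizeMake(256, 256);
        layer.levelsOfDetail = 4;       // coarser versions when zoomed out
        layer.levelsOfDetailBias = 2;   // sharper versions when zoomed in
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    // UIKit calls this once per tile; draw only content intersecting `rect`.
}
@end

UIKit then requests each tile lazily at the level of detail matching the current zoom, so offscreen tiles cost no memory.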
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data), there is no reason to create a UIView the entire size of the drawing. You just redraw the currently visible view as the user pans across the drawing, using the stored vector data. There is no persistent bitmapped representation of the drawing.
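As a rough illustration of this approach (not from the answer; Shape and documentOffset are hypothetical names):

#import <UIKit/UIKit.h>

@interface Shape : NSObject               // hypothetical vector model object
@property (nonatomic, assign) CGRect frame;
@property (nonatomic, strong) UIColor *color;
@end

@implementation Shape
@end

@interface DrawingView : UIView
@property (nonatomic, strong) NSArray *shapes;        // Shape objects
@property (nonatomic, assign) CGPoint documentOffset; // set by the pan gesture
@end

@implementation DrawingView
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // The part of the 4000x4000 document currently on screen.
    CGRect visible = CGRectOffset(self.bounds,
                                  self.documentOffset.x, self.documentOffset.y);
    for (Shape *shape in self.shapes) {
        if (!CGRectIntersectsRect(shape.frame, visible)) continue;
        // Map document coordinates to view coordinates and draw.
        CGRect onScreen = CGRectOffset(shape.frame,
                                       -self.documentOffset.x, -self.documentOffset.y);
        CGContextSetFillColorWithColor(ctx, shape.color.CGColor);
        CGContextFillRect(ctx, onScreen);  // rectangles only, as in the question
    }
}
@end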
If using bitmap data for drawing (i.e. a Photoshop type of app) then you'll likely need to use a mechanism that caches off-screen data into secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen size.
Sorry I don't have any iOS code examples for any of this - take this as a high-level abstraction and work from there.
Sounds like you want to be using UIScrollView.
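For example (a minimal sketch; canvasView is a placeholder, and it would still need tiled or on-demand drawing to avoid a full 4000x4000 backing store):

// Sketch: let UIScrollView handle the panning the question implements by hand.
UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
scrollView.contentSize = CGSizeMake(4000, 4000);   // the full document size
[scrollView addSubview:canvasView];                // hypothetical content view
[self.view addSubview:scrollView];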
I am writing a Cocoa application for Mac OS X. I'm trying to figure out how to determine the size of the image that will be captured by a camera. I would like to know this size so I can set up a view with an aspect ratio that won't distort the image. For example, if my view is defined to be 640x360 and my camera captures images at 640x480, the displayed image looks short and fat. I'm also displaying some other layers over the image, and I need the image size to be able to scale and position the layers properly.
I won't know the type of camera that is attached until run-time so I'd like to be able to interrogate the device and get attributes like image size. Thanks for the help...
You are altering the aspect ratio of the image when you capture at 640x360 instead of 640x480 or 320x240. You are doing something similar to a resize: using the whole image and making it a different size.
If you don't want to distort the image but use only a portion of it, you need to do a crop. Some hardware supports cropping; others don't, and you have to do it in software. Cropping means using only portions of the original image. In your case, you would discard the bottom 120 lines.
Example (image and source link omitted): the blue rectangle is the natural, or original, image and the red is a crop of it.
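For instance, a minimal software crop with Core Graphics, assuming capturedImage is a CGImageRef holding the 640x480 frame:

// Keep the top 640x360 of a 640x480 capture, discarding the bottom 120 lines.
// CGImageCreateWithImageInRect uses a top-left origin in image space.
CGRect cropRect = CGRectMake(0, 0, 640, 360);
CGImageRef cropped = CGImageCreateWithImageInRect(capturedImage, cropRect);
// ... display or process `cropped` ...
CGImageRelease(cropped);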