How do we do a stretch, pinch, or swirl effect on a UIImage using Core Graphics?
Something like this. The screenshot is from an application named PhotoTwist:
http://www.appvee.com/uploads/1220808851-PhotoTwist%202.PNG
Core Graphics doesn't pinch or swirl. The most you can do in Core Graphics is scale horizontally and vertically.
Core Image on the Mac can handle these effects, but Core Image is not available on the iPhone.
To create effects like the one shown, you would need to get at the raw pixel data (for example by drawing the CGImage into a bitmap context whose buffer you own) and apply the effect manually.
More likely, this app applies the image to an OpenGL surface and distorts the surface.
Neither approach would be easy and I have no further information on how you'd do it.
Thanks to Apple's PictureSwiper sample code and the very nice NSPageController tutorial from juniperi here on Stack Overflow, it's pretty easy to get tantalizingly close to the image-viewing capabilities of Preview. Specifically, I want to replicate the ability to swipe forwards/backwards between images/pages, use pinch-to-zoom to resize the images, use a gesture to rotate the images/pages, and support two-page mode.
But there are some hurdles that make me wonder if NSPageController is the right approach or if it is too limiting and a custom view controller is needed.
1) Images of varying sizes are simply displayed stacked, and if the top image is smaller, the underlying image(s) show through. Preview, given the same images, hides the larger underlying images/pages and fades the underlying image in/out with the swipe transition. I could hide underlying images by linking the page controller to the view rather than the image cell (like PictureSwiper), but that causes the entire view to scale on pinch-to-zoom and overall looks clunky.
2) Is it possible to use NSPageController with more than one image cell, e.g. two-page mode?
3) Is page/image rotation possible with NSPageController?
4) Is it possible to lock the zoom level for all the images, so they are uniformly displayed as navigated?
My apologies if this is too general a question, but the gist is whether NSPageController is too limited and problematic to extend, which would necessitate building a custom controller from scratch.
Thanks.
I am developing an iOS 7 video-recording application. The camera screen in our application needs to show a blurred background similar to the one shown in the iOS 7 Control Center. While the video preview is being shown, we need to show that blurred effect over it.
As suggested in WWDC session 226, I have used the code below to take a snapshot of the camera preview and then apply a blur, setting the blurred image on my view:
UIGraphicsBeginImageContextWithOptions(_camerapreview.frame.size, NO, 0);
[_camerapreview drawViewHierarchyInRect:_camerapreview.bounds afterScreenUpdates:NO];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
lightImage = [newImage applyLightEffect];
Here _camerapreview is a UIView that contains an AVCaptureVideoPreviewLayer. The newImage obtained from the context is, for some reason, black.
However, [_camerapreview snapshotViewAfterScreenUpdates:NO] does return a UIView with the preview layer's content, but there is no way to apply a blur to a UIView.
How can I get a blurred image from the camera preview layer?
I would suggest that you put your overlay onto the video preview itself as a composited video layer, and add a blur filter to that. (Check the WWDC AVFoundation sessions and the AVSimpleEditoriOS sample code for compositing images onto video) That way you're staying on the GPU instead of doing readbacks from GPU->CPU, which is slow. Then drop your overlay's UI elements on top of the video preview within a clear background UIView.
That should provide greater performance. As good as Apple? Well, they are using some private stuff developers don't yet have access to...
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc.) in which the views are rendered.
In my experience with AVFoundation it is not like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view without the image of the preview layer. Using -snapshotViewAfterScreenUpdates: returns a UIView that hosts a special layer, and if you try to make an image from that view you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each has its own limits: the first can't work simultaneously with an AVCaptureMovieFileOutput acquisition, and the latter makes the shutter noise.
I'm having difficulties finding any documentation about cropping images using OpenGL ES on the iPhone or iPad.
Specifically, I am capturing video frames at a mildly rapid pace (20 FPS), and need something quick that will crop an image. Is it feasible to use OpenGL here? If so, will it perform faster than cropping using Core Image and its associated methods?
It seems that using Core Image methods, I can't achieve faster than about 10-12 FPS output, and I'm looking for a way to hit 20. Any suggestions or pointers to usage of OpenGL for this?
Using OpenGL ES will generally be faster than the Core Image framework. Cropping is done by setting texture coordinates. Normally, texture coordinates covering the full image look like this:
{
0.0f,1.0f,
1.0f,1.0f,
0.0f,0.0f,
1.0f, 0.0f
}
The whole image will be drawn with the texture coordinates above. If you just want the upper-right part of the image, you can set the texture coordinates like this:
{
0.5f,1.0f,
1.0f,1.0f,
0.5f,0.5f,
1.0f, 0.5f
}
This will give you the upper-right quarter of the whole image. Never forget that the texture-coordinate origin in OpenGL ES is at the lower-left corner.
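More generally, the texture coordinates for any crop rectangle can be derived from a normalized sub-rectangle. A small C sketch of that arithmetic (the struct and function names are my own, not part of the OpenGL ES API):

```c
/* One (s, t) texture coordinate pair. */
typedef struct { float s, t; } TexCoord;

/* Fill coords[4] (ordered top-left, top-right, bottom-left, bottom-right,
   matching the arrays above) so the quad samples only the sub-rectangle
   with lower-left corner (x, y) and size (w, h), all in normalized [0,1]
   units with the origin at the texture's lower-left corner. */
static void cropTexCoords(float x, float y, float w, float h,
                          TexCoord coords[4])
{
    coords[0] = (TexCoord){ x,     y + h };  /* top-left */
    coords[1] = (TexCoord){ x + w, y + h };  /* top-right */
    coords[2] = (TexCoord){ x,     y     };  /* bottom-left */
    coords[3] = (TexCoord){ x + w, y     };  /* bottom-right */
}
```

Calling cropTexCoords(0.5f, 0.5f, 0.5f, 0.5f, coords) reproduces the upper-right-quarter coordinates shown above.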
I have a custom UIView (graphView) that draws a complex graphic in drawRect based on the iPad screen size of 1024 x 768. I'd like to take this view and shrink it down for use on the iPhone. I'm hoping to use the same drawing code and shrink the view instead of recalculating my graphic or creating a bitmap cache. The view is created on the fly, with no Interface Builder.
What is the best approach to do the shrinking?
Should I put the view inside of a UIScrollView?
Thanks!
If possible, just change the current transform matrix before drawing, using something like CGContextScaleCTM. That'll scale all your measurements sent into Core Graphics prior to rasterisation.
If that isn't possible for whatever reason, you could still draw at 1024x768 but apply a suitable transform to the UIView using CGAffineTransformMakeScale. That'll draw at the original pixel size and then scale down as a raster operation on the output pixels, so it'll be less efficient.
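In either case you need the uniform scale factor that fits the 1024x768 drawing into the smaller screen. A trivial aspect-fit helper (hypothetical name, plain C) computes the value you would pass to CGContextScaleCTM or CGAffineTransformMakeScale:

```c
/* Largest uniform scale at which a design of size (dw x dh)
   still fits entirely inside a target of size (tw x th). */
static float aspectFitScale(float dw, float dh, float tw, float th)
{
    float sx = tw / dw;  /* scale needed to fit the width  */
    float sy = th / dh;  /* scale needed to fit the height */
    return (sx < sy) ? sx : sy;  /* the tighter constraint wins */
}
```

For a 1024x768 drawing on a 480x320 (landscape) iPhone screen this yields 320/768 ≈ 0.417; applying that via CGContextScaleCTM before your existing drawRect code leaves all of your measurements untouched.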
I'm making a game using UIView.
I use a large (8192x8192) UIView as the map area (the game is bird's-eye view), with a UIImageView stretched across it displaying a grass texture.
This uses heaps of memory, doesn't run on older devices and nearly crashes Xcode whenever I try to edit it...
Is there an alternate method of creating an 8192x8192 map without it being this laggy?
If it's possible to tile your graphics, something involving CATiledLayer would probably be a good fit. CATiledLayer allows you to provide only the images that are necessary to display the currently viewable area of the view (just like Maps does).
Here is some example code for displaying a large PDF.
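The reason CATiledLayer stays fast is that it only asks you to draw the tiles intersecting the visible rect, so memory stays proportional to the screen, not to the 8192x8192 map. The index arithmetic it implies can be sketched in plain C (this helper is illustrative, not part of the CATiledLayer API):

```c
/* Inclusive range of tile columns/rows covering the visible rect with
   origin (ox, oy) and size (vw x vh), for square tiles of side
   tileSide. Only the tiles in this range need their content drawn. */
static void visibleTileRange(float ox, float oy, float vw, float vh,
                             float tileSide,
                             int *firstCol, int *lastCol,
                             int *firstRow, int *lastRow)
{
    *firstCol = (int)(ox / tileSide);
    *lastCol  = (int)((ox + vw - 1) / tileSide);
    *firstRow = (int)(oy / tileSide);
    *lastRow  = (int)((oy + vh - 1) / tileSide);
}
```

With 256-point tiles and a 320x480 viewport scrolled to (1000, 200), only columns 3-5 and rows 0-2 (nine tiles) get drawn, rather than the map's full 32x32 grid.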