I am developing an iOS 7 video recording application. The camera screen in our application needs to show a blurred background similar to the one shown in the iOS 7 Control Center. While the video preview is being shown, we need to show the blurred Control Center over it.
As suggested in WWDC session 226, I have used the code below to take a snapshot of the camera preview, apply a blur to it, and set the blurred image on my view.
UIGraphicsBeginImageContextWithOptions(_camerapreview.frame.size, NO, 0);
[_camerapreview drawViewHierarchyInRect:_camerapreview.bounds afterScreenUpdates:NO];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *lightImage = [newImage applyLightEffect];
Here _camerapreview is a UIView that hosts an AVCaptureVideoPreviewLayer. The newImage obtained from the context is, for some reason, black.
However, [_camerapreview snapshotViewAfterScreenUpdates:YES] does return a UIView with the preview layer's content, but there is no way to apply a blur to a UIView.
How can I get a blurred image from the camera preview layer?
I would suggest that you put your overlay onto the video preview itself as a composited video layer, and add a blur filter to that. (Check the WWDC AVFoundation sessions and the AVSimpleEditoriOS sample code for compositing images onto video) That way you're staying on the GPU instead of doing readbacks from GPU->CPU, which is slow. Then drop your overlay's UI elements on top of the video preview within a clear background UIView.
That should provide greater performance. As good as Apple? Well, they are using some private stuff developers don't yet have access to...
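For example, one way to keep the blur entirely on the GPU is to feed the camera frames from an AVCaptureVideoDataOutput through Core Image and draw them into a GLKView with a GPU-backed CIContext. This is only a sketch: _ciContext, _previewGLView, and the blur radius are illustrative names and values, not part of any existing API here.

#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>
#import <GLKit/GLKit.h>

// Sketch: blur each camera frame on the GPU and present it, with no GPU->CPU readback.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:frame forKey:kCIInputImageKey];
    [blur setValue:@12 forKey:kCIInputRadiusKey];
    CIImage *blurred = blur.outputImage;

    // _ciContext was created with [CIContext contextWithEAGLContext:...], so the filter
    // runs on the GPU and the result is drawn straight into the GLKView.
    [_previewGLView bindDrawable];
    [_ciContext drawImage:blurred
                   inRect:CGRectMake(0, 0, _previewGLView.drawableWidth, _previewGLView.drawableHeight)
                 fromRect:frame.extent];
    [_previewGLView display];
}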
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc.) in which the views are rendered.
In my experience with AVFoundation it is not like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view without the image of the preview layer. Using -snapshotViewAfterScreenUpdates: will return a UIView that hosts a special layer, and if you try to make an image from that view you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each one has its own limitation: the first can't work simultaneously with an AVCaptureMovieFileOutput recording, and the latter makes the shutter noise.
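For illustration, a rough sketch of the AVCaptureVideoDataOutput route; _ciContext and lastCameraFrame are illustrative names, and applyLightEffect is the UIImage+ImageEffects category method the question already uses (from Apple's WWDC 2013 sample code).

#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

// Sketch: keep the latest frame around as a UIImage so it can be blurred on demand.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // In a real app you would throttle this, or only do it while a blurred snapshot is needed.
    CGImageRef cgImage = [_ciContext createCGImage:ciImage fromRect:ciImage.extent];
    self.lastCameraFrame = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}

- (UIImage *)blurredCameraSnapshot
{
    // Blur only when the UI actually needs it; blurring every frame would be wasteful.
    return [self.lastCameraFrame applyLightEffect];
}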
Thanks to Apple's PictureSwiper sample code and the very nice NSPageController tutorial from juniperi here on Stack Overflow, it's pretty easy to get tantalizingly close to the image viewing capabilities of Preview. Specifically, I want to replicate the ability to swipe forwards/backwards between images/pages, use pinch-to-zoom to resize the images, use a gesture to rotate the images/pages, and support two-page mode.
But there are some hurdles that make me wonder if NSPageController is the right approach or if it is too limiting and a custom view controller is needed.
1) Images of varying sizes are simply displayed stacked, and if the top/upper image is smaller, the underlying image(s) show through. Using the same images, Preview hides the larger "underlying" images/pages and fades the underlying image in/out with the swipe transition. I could hide underlying images by linking the page controller to the view rather than the image cell (like PictureSwiper), but that causes the entire view to scale on pinch-to-zoom and overall looks clunky.
2) Is it possible to use NSPageController with more than one image cell, e.g. two-page mode?
3) Is page/image rotation possible with NSPageController?
4) Is it possible to lock the zoom level for all the images, so they are uniformly displayed as navigated?
My apologies if this is too general a question, but the gist is whether NSPageController is too limited and problematic to extend, which would necessitate building a custom controller from scratch.
Thanks.
I am developing an augmented reality app using the Vuforia SDK. I am trying to use AVCaptureVideoPreviewLayer and SceneKit for the application's rendering instead of the raw OpenGL calls provided by the Vuforia sample code.
I got the AVCaptureVideoPreviewLayer and SceneKit view working without Vuforia, i.e. I managed to draw a 3D scene on top of the camera video background. The code is at: https://github.com/lge88/scenekit-test0/blob/master/scenekit-test0/GameViewController.swift#L74-L85:
func initViews() {
    let rootView = self.view
    let scnView = createSceneView()
    let scene = createScene()
    scnView.scene = scene
    let videoView = createVideoView()
    rootView.addSubview(videoView)
    rootView.addSubview(scnView)
}
The implementation can be summarized as:
Create a UIView called videoView.
Initialize an AVCaptureVideoPreviewLayer, and add it as a sublayer of videoView.
Create an SCNView called scnView and assign the scene to scnView.
Add both videoView and scnView to the root UIView.
Currently I am trying to integrate the augmented reality feature (GameViewController.swift#L68-L71):
initViews()
animateScene()
initControls()
ARServer(size:viewFrame.size, done: initARCallback)
ARServer is a class that takes care of the Vuforia initialization; its implementation is taken from the Vuforia ImageTargets sample code. The tracker is working: it can successfully track the targets of the sample dataset.
However, the AVCaptureVideoPreviewLayer rendering doesn't work correctly: the video rendering area is resized, and the video layer is not updating; it shows a static image captured when the tracker camera started. Here is how it looks in an iPad screenshot: https://github.com/lge88/scenekit-test0/blob/master/video-preview-layer-issue.png
This strategy could get ugly on you really fast. Better would be to render everything into one view with one OpenGL context. If Vuforia wants to do its own GL stuff, it can share that view/context, too.
Look at Apple's GLCameraRipple sample code for getting live camera imagery into GL, and SCNRenderer for making SceneKit render its content into an arbitrary OpenGL (ES) context.
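A rough sketch of the SCNRenderer route; _eaglContext is assumed to be the GL ES context that already draws the camera/Vuforia background, and the other names are illustrative.

#import <SceneKit/SceneKit.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: let SceneKit render into the existing GL context instead of stacking views.
- (void)setUpSceneRenderer
{
    _scnRenderer = [SCNRenderer rendererWithContext:_eaglContext options:nil];
    _scnRenderer.scene = _scene;
}

- (void)drawFrame
{
    // ... draw the camera texture / Vuforia background into _eaglContext first ...
    [_scnRenderer renderAtTime:CACurrentMediaTime()]; // then the SceneKit content on top
}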
Alternatively, if you just want to get camera imagery into a SceneKit view, remember you can assign any Core Animation layer to the contents of a material — this should work for AVCaptureVideoPreviewLayer, too.
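A rough sketch of that alternative; all names are illustrative, and whether an AVCaptureVideoPreviewLayer actually renders this way is something you would have to verify on-device.

#import <SceneKit/SceneKit.h>
#import <AVFoundation/AVFoundation.h>

// Sketch: a SceneKit material property accepts a Core Animation layer as its contents.
- (void)addCameraBackgroundToScene:(SCNScene *)scene session:(AVCaptureSession *)captureSession
{
    AVCaptureVideoPreviewLayer *previewLayer =
        [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    previewLayer.frame = CGRectMake(0, 0, 1024, 768); // the layer needs a non-zero size

    SCNMaterial *videoMaterial = [SCNMaterial material];
    videoMaterial.diffuse.contents = previewLayer;

    SCNPlane *backgroundPlane = [SCNPlane planeWithWidth:4.0 height:3.0];
    backgroundPlane.materials = @[videoMaterial];
    [scene.rootNode addChildNode:[SCNNode nodeWithGeometry:backgroundPlane]];
}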
I'm building an app where I need to take a screenshot of a view whose subviews are camera sessions (AVFoundation sessions). I've tried this code:
CGRect rect = [self.containerView bounds];
UIGraphicsBeginImageContextWithOptions(rect.size,YES,0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.containerView.layer renderInContext:context];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This effectively gets me a UIImage with the views, except that the camera sessions are black.
I've tried the private method UIGetScreenImage() and it works perfectly, but as Apple doesn't allow this, I can't use it. I've also tried the approach in Apple's docs, but the result is the same. I've tracked the problem down to the AVFoundation sessions using layers. How can I achieve this? The app has a container view with two subviews that are stopped camera sessions.
If using iOS 7, it's fairly simple and you could do something like this from a UIViewController:
UIView *snapshotView = [self.view snapshotViewAfterScreenUpdates:YES];
You can also use this link from a window: iOS: what's the fastest, most performant way to make a screenshot programmatically?
For iOS 6 and earlier, I could only find the following Apple Technical Q&A: "How do I take a screenshot of my app that contains both UIKit and Camera elements?", which suggests that you:
Capture the contents of your camera view.
Draw that captured camera content yourself into the same graphics context in which you are rendering your UIKit elements (similar to what you did in your code). A rough sketch of both steps follows.
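In this sketch, stillImageOutput, containerView, cameraPreviewView, and overlayView are illustrative names for your own session output and views.

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Sketch: grab a still frame from the session, then draw it and the UIKit overlay
// into one graphics context.
- (void)captureCompositeScreenshotWithCompletion:(void (^)(UIImage *composite))completion
{
    AVCaptureConnection *connection =
        [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];

    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            NSData *jpegData =
                [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            UIImage *cameraImage = [UIImage imageWithData:jpegData];

            dispatch_async(dispatch_get_main_queue(), ^{
                CGRect bounds = self.containerView.bounds;
                UIGraphicsBeginImageContextWithOptions(bounds.size, YES, 0.0);

                // Step 1: draw the captured camera content where the preview sits on screen.
                [cameraImage drawInRect:self.cameraPreviewView.frame];

                // Step 2: draw the UIKit elements that sit above the preview. Drawing the
                // whole hierarchy would paint the (black) preview view over the photo, so
                // only the overlay view is rendered here.
                [self.overlayView drawViewHierarchyInRect:self.overlayView.frame
                                       afterScreenUpdates:NO];

                UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
                UIGraphicsEndImageContext();
                if (completion) completion(composite);
            });
        }];
}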
I too am currently looking for a solution to this problem!
I am out at the moment, so I can't test what I have found, but take a look at these links:
Screenshots - A Legal Way To Get Screenshots seems like it's on the right track; there is an example project and the initial post that goes with it.
When I manage to get it to work I will definitely update this answer!
One of the ways to improve the user experience in iOS when showing images is to download them asynchronously, without blocking the main thread, and then display them.
But I want to add a few things to this:
1) Initially, when there is no image, show a spinner while the async download is in progress.
2) After the download, cache the image on the local disk for later use.
3) After the download, populate the image property of the UIImageView.
4) And don't just plonk the image into the view for the user; slowly fade it in (i.e. from alpha 0.0 to 1.0).
I have been using SDWebImage for some time now. It works well but does not satisfy my 1st requirement (the spinner) or my 4th (the fade-in).
Is there any help out there to satisfy all this?
Three20 (http://www.three20.info) has a TTImageView class that satisfies 2-3; you can subclass it, override setImage:, and create the fade animation there (or just modify TTImageView.m directly).
The spinner is easy as well: while you're modifying TTImageView, you can add a TTActivityView on top and remove it in setImage:.
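Not actual Three20 code, but a rough, library-agnostic sketch of the same idea (spinner until the image arrives, then a cross-fade); any loader that eventually calls setImage: would behave the same way. The class name here is hypothetical.

#import <UIKit/UIKit.h>

// Sketch: show a spinner while loading, then fade the image in instead of popping it in.
@interface FadeInImageView : UIImageView
@property (nonatomic, strong) UIActivityIndicatorView *spinner;
@end

@implementation FadeInImageView

- (void)startLoading
{
    if (!self.spinner) {
        self.spinner = [[UIActivityIndicatorView alloc]
            initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray];
        self.spinner.center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
        [self addSubview:self.spinner];
    }
    [self.spinner startAnimating];
}

- (void)setImage:(UIImage *)image
{
    [self.spinner removeFromSuperview];   // requirement 1: spinner goes away
    self.alpha = 0.0;
    [super setImage:image];               // requirement 3: populate the image view
    [UIView animateWithDuration:0.3 animations:^{
        self.alpha = 1.0;                 // requirement 4: fade in, don't plonk in
    }];
}

@end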
How do we do stretch, pinch, and swirl effects on a UIImage using Core Graphics?
Something like this (the screenshot is from an application named PhotoTwist): http://www.appvee.com/uploads/1220808851-PhotoTwist%202.PNG
CoreGraphics doesn't pinch or swirl. The most you can do in CoreGraphics is scale horizontally and vertically.
CoreImage on the Mac can handle these effects but CoreImage is not available on the iPhone.
To create effects like the one shown, you would need to get the raw pixel data (for example, by drawing the CGImage into a bitmap context and reading the buffer) and apply the effect manually.
More likely, this app applies the image to an OpenGL surface and distorts the surface.
Neither approach would be easy and I have no further information on how you'd do it.
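To make the manual route a little more concrete, here is a rough sketch of the "read the raw pixels and remap them yourself" idea: a simple radial bulge/pinch, with a hypothetical function name and mapping. It is far too slow for anything interactive; for that you would do the same remapping in an OpenGL shader, which is presumably what PhotoTwist does.

#include <math.h>
#include <string.h>
#import <UIKit/UIKit.h>

// Sketch: inverse mapping; each destination pixel samples a source pixel whose distance
// from the centre has been rescaled. strength > 0 bulges the centre outward, a small
// negative value (e.g. -0.3) pinches it inward. A swirl would instead offset the sample
// angle as a function of radius.
static UIImage *RadialDistortedImage(UIImage *input, CGFloat strength)
{
    CGImageRef cgImage = input.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef srcCtx = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, space,
                                                (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextRef dstCtx = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, space,
                                                (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(srcCtx, CGRectMake(0, 0, width, height), cgImage);

    uint8_t *src = (uint8_t *)CGBitmapContextGetData(srcCtx);
    uint8_t *dst = (uint8_t *)CGBitmapContextGetData(dstCtx);

    CGFloat cx = width / 2.0, cy = height / 2.0;
    CGFloat maxR = MIN(cx, cy);

    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            CGFloat dx = x - cx, dy = y - cy;
            CGFloat r = sqrt(dx * dx + dy * dy);
            CGFloat scale = (r > 0 && r < maxR) ? pow(r / maxR, strength) : 1.0;
            CGFloat fx = cx + dx * scale, fy = cy + dy * scale;
            if (fx >= 0 && fy >= 0 && fx < width && fy < height) {
                memcpy(dst + y * bytesPerRow + x * 4,
                       src + (size_t)fy * bytesPerRow + (size_t)fx * 4, 4);
            }
        }
    }

    CGImageRef outImage = CGBitmapContextCreateImage(dstCtx);
    UIImage *result = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    CGContextRelease(srcCtx);
    CGContextRelease(dstCtx);
    CGColorSpaceRelease(space);
    return result;
}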