renderInContext called on MPMoviePlayerController - objective-c

I am trying to programmatically create an image from the current frame of an MPMoviePlayerController video. I am calling renderInContext: on the player's view.layer, but the image that comes back is all black apart from the video player controls, whereas I expect to see the current frame of the video.
My code (http://pastie.org/1020066)
UIGraphicsBeginImageContext(moviePlayer.view.bounds.size);
[moviePlayer.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// I also tried this code (same results)
// moviePlayer is a subview of videoView (UIView object)
UIGraphicsBeginImageContext(videoView.bounds.size);
[videoView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

If you are trying to get a thumbnail of your movie, you should probably check the thumbnailImageAtTime:timeOption: method of MPMoviePlayerController.
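For example, a minimal sketch (moviePlayer being the MPMoviePlayerController from the question):
// Ask the player itself for the frame at the current playback position.
// MPMovieTimeOptionNearestKeyFrame is faster; MPMovieTimeOptionExact gives frame accuracy.
UIImage *currentFrame = [moviePlayer thumbnailImageAtTime:moviePlayer.currentPlaybackTime
                                               timeOption:MPMovieTimeOptionExact];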

Related

Render play image over existing image in Objective-C

I have a collection view containing some videos and images.
Using AVFoundation I am able to capture video on the iPhone and generate a thumbnail with AVAssetImageGenerator. When shown in the gallery, an image should indicate that it is a video thumbnail, so I need to modify the image by drawing a video symbol (like a play icon) over it.
Is this possible?
You could use CoreGraphics to edit the image.
First, create a UIImage with the image you would like to edit. Then, do something like this:
UIImage *oldThumbnail; //set this to the original thumbnail image
UIGraphicsBeginImageContext(oldThumbnail.size);
[oldThumbnail drawInRect:CGRectMake(0, 0, oldThumbnail.size.width, oldThumbnail.size.height)];
/*Now there are two ways to draw the play symbol.
One would be to have a pre-rendered play symbol that you load into a UIImage and draw with drawInRect */
UIImage *playSymbol = [UIImage imageNamed:@"PlaySymbol.png"];
CGRect playSymbolRect; //I'll let you figure out calculating where you should draw the play symbol
[playSymbol drawInRect: playSymbolRect];
//The other way would be to draw the play symbol directly using CoreGraphics calls. Start with this:
CGContextRef context = UIGraphicsGetCurrentContext();
//now use CoreGraphics calls. I won't go over it here, but the second answer to this question may be helpful.
//Once you have finished drawing your image, you can put it in a UIImage.
UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); //make sure you remember to do this :)
Now you can use the newly generated thumbnail in a UIImageView, cache it so you don't need to re-render it every time, etc.
This should work (you may have to play with positions and sizes):
-(UIImage*)drawPlayButton:(UIImage*)image
{
    UIImage *playButton = [UIImage imageNamed:@"playbutton.png"];
    UIGraphicsBeginImageContext(image.size);
    // Draw the original thumbnail, then center the play button on top of it
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    [playButton drawInRect:CGRectMake(image.size.width/2 - playButton.size.width/2, image.size.height/2 - playButton.size.height/2, playButton.size.width, playButton.size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
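For reference, usage would look something like this (oldThumbnail is just a placeholder name for your existing thumbnail image):
UIImage *thumbnailWithPlayIcon = [self drawPlayButton:oldThumbnail];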

Get Image from image View

If we add more than one view to an image view, along with an NSString, is it possible to get a single image from the image view that contains all of those images and strings?
like this
From what I understand of your question, you have added an image view for the building, another image view for the video symbol (as a subview), and a UILabel (again as a subview) for the time. You want all of these rendered as a single UIImage. For this, you can use:
UIGraphicsBeginImageContext(imageView.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[imageView.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The screenShot UIImage will contain your desired image.

iOS 7 blur effect on videoPlayer

I want to apply a blur effect to a video player.
I play video using AVPlayer, and whenever I want to share the video to social media, a share window is displayed over the video player. I want to apply a blur effect to the share window's background.
renderInContext: does not render AVPlayer's layer, but I read that Apple's newer drawViewHierarchyInRect: API will render special layers such as a video player or an OpenGL layer.
So I used drawViewHierarchyInRect:, and it works on the simulator but not on a device.
Any idea?
- (UIImage *)snapshotOfVideoPlayer
{
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 1.0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
I believe the only way is using the AVAssetImageGenerator.
Assuming you have a reference to your AVPlayerItem:
AVURLAsset *asset = (AVURLAsset *)self.playerItem.asset;
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
CGImageRef thumb = [imageGenerator copyCGImageAtTime:self.playerItem.currentTime
                                          actualTime:NULL
                                               error:NULL];
self.videoScreenshotIV.image = [UIImage imageWithCGImage:thumb];
CGImageRelease(thumb); // copyCGImageAtTime: returns an owned CGImageRef, so release it
Notice the self.playerItem.currentTime. This will ensure the image will be exactly the same as the moment of the screenshot.
The videoScreenshotIV is a UIImageView (contentMode scaleAspectFit) that sits directly over the AVPlayer view with exactly the same bounds. I hide this UIImageView until I need to take a screenshot; then I unhide it, set the image, take the screenshot, and hide it again. It works perfectly! :)
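One caveat: by default AVAssetImageGenerator may hand back a frame near, rather than exactly at, the requested time. If you need the exact frame, the generator's tolerances can be tightened before requesting the image; a small sketch, reusing the imageGenerator from above:
// Force frame-accurate extraction (slower, but matches the requested time exactly)
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero;
imageGenerator.requestedTimeToleranceAfter = kCMTimeZero;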

Programmatically email an image with some subviews but not others?

I'm working on something that will create an email and attach an image of the screen for the user to send. I'm using the following code to create and attach the image.
UIGraphicsBeginImageContext([self.view frame].size);
[[self.view layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIGraphicsBeginImageContext([self.view bounds].size);
[myImage drawInRect:CGRectMake(0, 0, [self.view bounds].size.width,[self.view bounds].size.height)];
myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(myImage);
[mailer addAttachmentData:imageData mimeType:@"image/png" fileName:@"blah"];
However, this includes all of the subviews, and I want to exclude a toolbar and a segmented view, leaving only the view above the toolbar and the text field in that view. I've tagged all the relevant views and subviews, but how do I use those tags to create an image that includes what I want and excludes what I don't want?
Before you render the image, set all the views you don't want to show to hidden.
I have several apps that do this exact same thing, and that's what I do.
renderInContext: and related functions basically take a screenshot, so what you see is what you get.
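A rough sketch of that approach, assuming the toolbar and segmented control were given tags 100 and 101 (the tag values here are only for illustration):
// Temporarily hide the views that should not appear in the attachment
UIView *toolbar = [self.view viewWithTag:100];
UIView *segmentedControl = [self.view viewWithTag:101];
toolbar.hidden = YES;
segmentedControl.hidden = YES;
UIGraphicsBeginImageContext([self.view bounds].size);
[[self.view layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Restore the original visibility
toolbar.hidden = NO;
segmentedControl.hidden = NO;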

Convert image with overlay view into UIImage

In my app I have an image which is shown to the user, plus an overlay view with custom drawing done via CoreGraphics. I want to save the image, with the overlay over it, to the Camera Roll.
So how should I make a new UIImage from the existing UIImage and the custom overlay view?
I tried creating a CGContextRef, drawing the UIImage into it, and then drawing the CALayer of the overlay view into the same context. But the UIImage I get from this context is just the overlay view over a white background. It doesn't seem to preserve transparency and just fills everything with white.
How can this be fixed?
Thank you.
If I understand your question correctly, you have two views:
An image which is shown to the user (i.e. viewA)
The overlay view with custom drawing (i.e. viewB)
If you create another view (i.e. viewC), add both viewA and viewB as subviews of viewC:
[viewC addSubview:viewA];
[viewC addSubview:viewB];
Now render viewC into an image:
UIImage *outputImage = [self convertViewToImage:viewC ratio:1];
The convertViewToImage:ratio: method:
-(UIImage*)convertViewToImage:(UIView*)view ratio:(float)ratio{
    UIGraphicsBeginImageContext(view.bounds.size);
    //UIGraphicsBeginImageContext(view.frame.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Crop to the requested portion of the rendered image
    CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], CGRectMake(0, 0, view.bounds.size.width*ratio, view.bounds.size.height*ratio));
    UIImage *targetImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return targetImage;
}
I believe that will do it.