I have a video saved in an AVAsset. Is there a simple way for me to add an overlay to the video so that I can have a watermark in the corner of the screen?
I am already adding it to an AVMutableCompositionTrack and then creating an AVAssetExportSession. Or is that impossible? Do I need to create an instance of AVMutableVideoComposition as well, and if so, how?
Is there a way for me to convert my AVAsset to an AVMutableVideoComposition and back?
This library has the function you're asking for. You can find your answer by reading its source code.
Hope this helps ;)
Please try this code to add an overlay to the video:
// Create the watermark layer from an image
CALayer *overlayLayer = [CALayer layer];
UIImage *overlayImage = [UIImage imageNamed:@"overlay.png"];
[overlayLayer setContents:(id)[overlayImage CGImage]];
overlayLayer.frame = CGRectMake(0, 0, size.width, size.height);
[overlayLayer setMasksToBounds:YES];
// Set up the parent layer, with the video layer below the overlay
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, size.width, size.height);
videoLayer.frame = CGRectMake(0, 0, size.width, size.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:overlayLayer];
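To answer the AVMutableVideoComposition part of your question: you don't convert the asset, you build a video composition alongside it and assign it to the export session. A rough sketch, assuming composition is your existing AVMutableComposition and exportSession is your AVAssetExportSession:
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = size;
videoComposition.frameDuration = CMTimeMake(1, 30);
// Wire the layer tree from above into the composition
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool
    videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
// One instruction spanning the whole composition
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [composition duration]);
AVAssetTrack *videoTrack = [[composition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.instructions = [NSArray arrayWithObject:instruction];
exportSession.videoComposition = videoComposition;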
Please check the following link as well; it may be helpful to you:
http://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
Related
I need to get a screenshot of a video on the click of a button. Below is the code I used, but the image shows only the bottom part of the video (the play/progress controls with the elapsed time). I have used MPMoviePlayerController to play the video.
// Render the player's view into an image context
CGRect rect = [_moviePlayer.view bounds];
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[_moviePlayer.view.layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Show the captured image
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 100, 100, 100)];
[imgView setImage:image];
imgView.layer.borderWidth = 2.0;
[_firstimage addSubview:imgView];
NSURL *url = [NSURL URLWithString:@"http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8"];
This is how it shows. How can I get the video image? Can anybody help me?
How about using [MPMoviePlayerController requestThumbnailImagesAtTimes:timeOption:]?
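A sketch of how that might look, assuming _moviePlayer is your existing player (the result arrives asynchronously via a notification):
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(thumbnailReady:)
                                             name:MPMoviePlayerThumbnailImageRequestDidFinishNotification
                                           object:_moviePlayer];
// Ask for a frame near the current playback time
[_moviePlayer requestThumbnailImagesAtTimes:[NSArray arrayWithObject:[NSNumber numberWithDouble:_moviePlayer.currentPlaybackTime]]
                                 timeOption:MPMovieTimeOptionNearestKeyFrame];

- (void)thumbnailReady:(NSNotification *)notification
{
    UIImage *thumbnail = [notification.userInfo objectForKey:MPMoviePlayerThumbnailImageKey];
    // display the thumbnail as needed
}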
// Note: UIGetScreenImage() is a private/undocumented API
CGImageRef originalImage = UIGetScreenImage();
// Crop the full-screen capture down to the video area
CGImageRef videoImage = CGImageCreateWithImageInRect(originalImage, CGRectMake(0, 0, 680, 300));
UIImage *snapShotImage = [UIImage imageWithCGImage:videoImage];
UIImageWriteToSavedPhotosAlbum(snapShotImage, nil, nil, nil);
CGImageRelease(originalImage);
CGImageRelease(videoImage);
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 100, 250, 200)];
[imgView setImage:snapShotImage];
imgView.layer.borderWidth = 2.0;
[_firstimage addSubview:imgView];
This code works... thanks for the help!
I'm trying to add an overlay to my video which contains a sublayer that has the content changed over time. So I'm adding CAKeyframeAnimation to the layer as follows:
CALayer *overlayLayer = [CALayer layer];
[overlayLayer setFrame:CGRectMake(0, 0, size.width, size.height)];
[overlayLayer setMasksToBounds:YES];
NSArray *frames = [self.gifImage frames];
CALayer *aLayer = [CALayer layer];
UIImage *firstImage = [frames objectAtIndex:0];
CGFloat nativeWidth = CGImageGetWidth(firstImage.CGImage);
CGFloat nativeHeight = CGImageGetHeight(firstImage.CGImage);
CGRect frame = CGRectMake(0.0, 0.0, nativeWidth, nativeHeight);
aLayer.frame = frame;
aLayer.contents = (id)firstImage.CGImage;
[aLayer setMasksToBounds:YES];
CAKeyframeAnimation *animationSequence = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
animationSequence.calculationMode = kCAAnimationDiscrete;
animationSequence.duration = [self.gifImage totalDuration];
animationSequence.repeatCount = HUGE_VALF;
NSMutableArray *animationSequenceArray = [[NSMutableArray alloc] init];
for (UIImage *image in frames) {
[animationSequenceArray addObject:(id)image.CGImage];
}
animationSequence.values = animationSequenceArray;
[aLayer addAnimation:animationSequence forKey:@"contents"];
[overlayLayer addSublayer:aLayer];
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, size.width, size.height);
videoLayer.frame = CGRectMake(0, 0, size.width, size.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:overlayLayer];
composition.animationTool = [AVVideoCompositionCoreAnimationTool
videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
Somehow, when I add aLayer as a sublayer of my view, it animates properly. But when I add it to the overlay of the video, the animation doesn't work. Is there something I'm missing here? Any help is appreciated.
If you use kCAAnimationDiscrete, here's a checklist (a sketch follows below):
keyTimes values must be normalized between 0 and 1, not absolute times
keyTimes must start at 0.0 and end with 1.0
keyTimes must have one more value than your animation has frames
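For example, a minimal sketch against the question's snippet (frames and animationSequence come from that code):
NSMutableArray *keyTimes = [NSMutableArray array];
NSUInteger frameCount = [frames count];
// N frames need N + 1 keyTimes, evenly spaced from 0.0 to 1.0
for (NSUInteger i = 0; i <= frameCount; i++) {
    [keyTimes addObject:[NSNumber numberWithFloat:(float)i / frameCount]];
}
animationSequence.keyTimes = keyTimes;
// For export via AVVideoCompositionCoreAnimationTool, the animation must also
// begin at AVCoreAnimationBeginTimeAtZero and survive completion:
animationSequence.beginTime = AVCoreAnimationBeginTimeAtZero;
animationSequence.removedOnCompletion = NO;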
I've finally managed to fix this issue by changing calculationMode to kCAAnimationLinear. This seems like a silly issue, but I hope it may be useful for someone.
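In code, that's the one-line change to the question's snippet:
animationSequence.calculationMode = kCAAnimationLinear;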
I would like to synchronize the movement of an SKSpriteNode with a video, so that as the video seeks forward, seeks backward, or just plays, the SKSpriteNode moves accordingly.
I know this can be done easily enough with CALayers, but can it be done with SKSpriteNode(s) in place of CALayers?
// 1) Create a sync layer
AVPlayerItem * item = player.currentItem;
AVSynchronizedLayer* syncLayer = [AVSynchronizedLayer synchronizedLayerWithPlayerItem:item];
syncLayer.frame = CGRectMake(10, 220, 300, 10);
syncLayer.backgroundColor = [[UIColor lightGrayColor] CGColor];
[self.view.layer addSublayer:syncLayer];
// 2) Create a sublayer for the synclayer
CALayer *sublayer = [CALayer layer];
sublayer.frame = CGRectMake(0, 0, 200, 50); //our black box
sublayer.backgroundColor = [[UIColor blackColor] CGColor];
[syncLayer addSublayer:sublayer];
// 3) animate the layer!
CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"position"];
anim.fromValue = [NSValue valueWithCGPoint:CGPointMake(924, 300)];
anim.toValue = [NSValue valueWithCGPoint:CGPointMake(100, 300)];
anim.removedOnCompletion = NO;
anim.beginTime = AVCoreAnimationBeginTimeAtZero;
anim.duration = CMTimeGetSeconds(item.asset.duration);
NSLog(@"%f", anim.duration);
[sublayer addAnimation:anim forKey:nil];
I have this code and it's not adding anything to the view; can you help me figure out what is wrong?
Also, I will apply a filter to the image later, once I get this working, but tell me if this is the wrong approach for adding filters.
UIGraphicsPushContext(context);
CALayer *layer = [CALayer layer];
CGColorRef bgColor = [UIColor colorWithHue:0.6 saturation:1.0 brightness:1.0 alpha:1.0].CGColor;
layer.frame = CGRectMake(0, 0, 200, 200);
CGContextSetFillColorWithColor(context, bgColor);
CGContextFillRect(context, layer.bounds);
CGContextSaveGState(context);
CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
CGContextSetFillColorSpace(context, patternSpace);
CGColorSpaceRelease(patternSpace);
CGContextFillRect(context, layer.bounds);
CGContextRestoreGState(context);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *borrar = [[UIImageView alloc]initWithImage:image];
[self.view addSubview:borrar];
You never open an image context. Use UIGraphicsBeginImageContextWithOptions.
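A sketch of that fix, keeping the question's drawing code in between (the size matches the layer's 200x200 frame):
// Open an image context before drawing into it
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
// ... the drawing code from the question goes here ...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();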
I should point out that drawing and rendering in Objective-C is my weakness. Now, here's my problem.
I want to add a 'Day/Night' feature to my game. It has lots of objects on a map. Every object is a UIView containing some data in variables and some UIImageViews: the sprite, and for some of the objects a hidden ring (used to show selection).
I want to be able to darken the content of the UIView, but I can't figure out how. The sprite is a PNG with transparency. I've just managed to add a black rectangle behind the sprite using this:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5); // half-transparent black
CGContextFillRect(ctx, rect);
CGContextRestoreGState(ctx);
As I've read, this should be done in the drawRect method. Help please!
If you want to understand better my scenario, the App where I'm trying to do this is called 'Kipos', at the App Store.
Floris497's approach is a good strategy for blanket darkening of more than one image at a time (probably more what you're after in this case). But here's a general-purpose method to generate darker UIImages (while respecting alpha pixels):
+ (UIImage *)darkenImage:(UIImage *)image toLevel:(CGFloat)level
{
// Create a temporary view to act as a darkening layer
CGRect frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
UIView *tempView = [[UIView alloc] initWithFrame:frame];
tempView.backgroundColor = [UIColor blackColor];
tempView.alpha = level;
// Draw the image into a new graphics context
UIGraphicsBeginImageContext(frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[image drawInRect:frame];
// Flip the context vertically so we can draw the dark layer via a mask that
// aligns with the image's alpha pixels (Quartz uses flipped coordinates)
CGContextTranslateCTM(context, 0, frame.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, frame, image.CGImage);
[tempView.layer renderInContext:context];
// Produce a new image from this context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIGraphicsEndImageContext();
[tempView release];
return toReturn;
}
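Usage might look like this (the class name and sprite.image are placeholders for wherever you put the method and whatever image you're darkening; level is the 0.0-1.0 strength of the black overlay):
UIImage *darkened = [ObjectView darkenImage:sprite.image toLevel:0.5];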
The best way would be to add a Core Image filter to the layer that darkens it. You could use CIExposureAdjust.
CIFilter *filter = [CIFilter filterWithName:@"CIExposureAdjust"];
[filter setDefaults];
[filter setValue:[NSNumber numberWithFloat:-2.0] forKey:@"inputEV"];
view.layer.filters = [NSArray arrayWithObject:filter];
Here is how to do it:
// inputEV controls the exposure; the lower the value, the darker (e.g. -1 -> dark)
-(UIImage*)adjustImage:(UIImage*)image exposure:(float)inputEV
{
CIImage *inputImage = [[CIImage alloc] initWithCGImage:[image CGImage]];
UIImageOrientation originalOrientation = image.imageOrientation;
CIFilter* adjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
[adjustmentFilter setDefaults];
[adjustmentFilter setValue:inputImage forKey:@"inputImage"];
// Use the inputEV parameter (the original hardcoded -1.0 here)
[adjustmentFilter setValue:[NSNumber numberWithFloat:inputEV] forKey:@"inputEV"];
CIImage *outputImage = [adjustmentFilter valueForKey:@"outputImage"];
CIContext* context = [CIContext contextWithOptions:nil];
CGImageRef imgRef = [context createCGImage:outputImage fromRect:outputImage.extent] ;
UIImage* img = [[UIImage alloc] initWithCGImage:imgRef scale:1.0 orientation:originalOrientation];
CGImageRelease(imgRef);
return img;
}
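For example, to darken an image by one stop (originalImage is a stand-in for your own UIImage):
UIImage *darker = [self adjustImage:originalImage exposure:-1.0];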
Remember to import:
#import <QuartzCore/QuartzCore.h>
And add the CoreGraphics and CoreImage frameworks to your project.
Tested on an iPhone 3GS with iOS 5.1.
CIFilter is available starting from iOS 5.0.
Draw a black UIView over it and set userInteractionEnabled to NO; hopefully you can do something with this.
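A sketch of creating that nightView first (the frame and superview are assumptions):
UIView *nightView = [[UIView alloc] initWithFrame:self.view.bounds];
nightView.backgroundColor = [UIColor blackColor];
nightView.alpha = 0.0; // start fully light
nightView.userInteractionEnabled = NO; // let touches pass through
[self.view addSubview:nightView];
Then use this to make it dark: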
[UIView animateWithDuration:2
                 animations:^{ nightView.alpha = 0.4; }
                 completion:^(BOOL finished){ NSLog(@"done making it dark"); }];
And to make it light again:
[UIView animateWithDuration:2
                 animations:^{ nightView.alpha = 0.0; }
                 completion:^(BOOL finished){ NSLog(@"done making it light again"); }];