AVAssetReader and AVAssetWriter to overlay video (Objective-C)

I am trying to overlay a recorded video with some images, using AVAssetReader and AVAssetWriter. Following this tutorial, I am able to copy a video (and its audio) into a new file. Now my objective is to overlay some of the original video frames with images, using this code:
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
    // Get the next video sample buffer, and append it to the output file.
    CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
    UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
    NSDictionary *attributes = @{NSFontAttributeName:font, NSForegroundColorAttributeName:[UIColor lightTextColor]};
    UIImage *img = [self imageFromText:@"test" :attributes];
    CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
    [ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    if (sampleBuffer != NULL)
    {
        BOOL success = [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
        completedOrFailed = !success;
    }
    else
    {
        completedOrFailed = YES;
    }
}
And to create an image from text:
- (UIImage *)imageFromText:(NSString *)text :(NSDictionary *)attributes {
    CGSize size = [text sizeWithAttributes:attributes];
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [text drawAtPoint:CGPointMake(0.0, 0.0) withAttributes:attributes];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
The video and audio are copied, but there is no text on my video.
Question 1: Why is this code not working?
Moreover, I want to be able to check the timecode of the frame currently being read. For example, I would like to insert text with the current timecode into the video.
I tried this code, following this tutorial:
AVAsset *localAsset = [AVAsset assetWithURL:mURL];
NSError *localError;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:localAsset error:&localError];
BOOL success = (assetReader != nil);

// Create an asset reader output for the first timecode track of the asset
if (success) {
    AVAssetTrack *timecodeTrack = nil;

    // Grab the first timecode track, if the asset has any
    NSArray *timecodeTracks = [localAsset tracksWithMediaType:AVMediaTypeTimecode];
    if ([timecodeTracks count] > 0)
        timecodeTrack = [timecodeTracks objectAtIndex:0];

    if (timecodeTrack) {
        AVAssetReaderTrackOutput *timecodeOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:timecodeTrack outputSettings:nil];
        [assetReader addOutput:timecodeOutput];
    } else {
        NSLog(@"%@ has no timecode tracks", localAsset);
    }
}
But I get the log:
[...] has no timecode tracks
Question 2: Why doesn't my video have any AVMediaTypeTimecode track? And how can I get the timecode of the current frame?
Thanks for your help

I found the solutions:
To overlay video frames, you need to fix the decompression settings:
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *decompressionVideoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

// If there is a video track to read, set the decompression settings to 32BGRA and create the asset reader output.
assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
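With the reader vending 32BGRA pixel buffers, the Core Image render into the buffer has something it can write into. Below is my own minimal sketch of the per-frame overlay, not the tutorial's code: it assumes the CIContext and the text image are created once before the loop (reusing the imageFromText:: helper and the attributes dictionary from the question), and it composites the text over the decoded frame rather than rendering the text on its own.
// Created once, before the read/write loop.
CIContext *ciContext = [CIContext contextWithOptions:@{kCIContextWorkingColorSpace : [NSNull null]}];
CIImage *textImage = [[CIImage alloc] initWithCGImage:[self imageFromText:@"test" :attributes].CGImage];

// Inside the loop, after copyNextSampleBuffer and the NULL check:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Composite the text over the decoded frame, then write the result back into the buffer.
// (If you see artifacts from reading and writing the same buffer, render into a fresh
// pixel buffer from the writer's pixel buffer pool instead.)
CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIImage *composited = [textImage imageByCompositingOverImage:frame];
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
[ciContext render:composited toCVPixelBuffer:pixelBuffer bounds:[frame extent] colorSpace:rgb];
CGColorSpaceRelease(rgb);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);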
To get the frame timestamp, you have to read the video information and then use a counter to compute the current timestamp:
durationSeconds = CMTimeGetSeconds(asset.duration);
timePerFrame = 1.0 / (Float64)assetVideoTrack.nominalFrameRate;
totalFrames = durationSeconds * assetVideoTrack.nominalFrameRate;
Then, inside this loop:
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
you can find the timestamp:
CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL) {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer) {
        Float64 secondsIn = ((float)counter / totalFrames) * durationSeconds;
        CMTime imageTimeEstimate = CMTimeMakeWithSeconds(secondsIn, 600);
        mergeTime = CMTimeGetSeconds(imageTimeEstimate);
        counter++;
    }
}
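As a side note, if all you need is the presentation time of each frame (rather than a dedicated timecode track), the sample buffer already carries it, so the counter can be avoided. A small sketch:
// The decoded sample buffer carries its own presentation timestamp.
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
Float64 seconds = CMTimeGetSeconds(pts);
NSLog(@"current frame at %.3f s", seconds);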
I hope it helps!

Related

How to take a screenshot of the desktop wallpaper, without icons and windows, for multiple screens?

In my Cocoa application I want to take a screenshot of the desktop wallpaper for multiple screens. I found this link, and with it I made it work for a single screen. However, I am finding it difficult for multiple screens. So how would I do it for multiple screens?
Here is my code:
- (NSImage *)desktopAsImageWithScreen:(NSScreen *)screen {
    NSDictionary *deviceDescription = [screen deviceDescription];

    CFMutableArrayRef windowIDs = CFArrayCreateMutable(NULL, 0, NULL);
    CFArrayRef allWindowIDs = CGWindowListCreate(kCGWindowListOptionOnScreenBelowWindow, kCGDesktopIconWindowLevel);

    NSLog(@"min window level: %d", kCGMinimumWindowLevel);
    NSLog(@"looking for windows on level: %d", kCGDesktopWindowLevel);
    NSLog(@"NOT actually looking for windows on level: %d", kCGMinimumWindowLevel);
    NSLog(@"NOT actually looking for windows on the dock level: %d", kCGDockWindowLevel);

    if (allWindowIDs)
    {
        CFArrayRef windowDescs = CGWindowListCreateDescriptionFromArray(allWindowIDs);
        for (CFIndex idx = 0; idx < CFArrayGetCount(windowDescs); idx++)
        {
            CFDictionaryRef dict = CFArrayGetValueAtIndex(windowDescs, idx);
            CFStringRef ownerName = CFDictionaryGetValue(dict, kCGWindowOwnerName);
            NSLog(@"owner name = %@", ownerName);

            if (CFStringCompare(ownerName, CFSTR("Dock"), 0) == kCFCompareEqualTo)
            {
                // the Dock level has the dock and the desktop picture
                CGRect windowBounds;
                BOOL success = CGRectMakeWithDictionaryRepresentation(
                    (CFDictionaryRef)(CFDictionaryGetValue(dict, kCGWindowBounds)),
                    &windowBounds);
                NSRect screenBounds = screen.frame;
                CFNumberRef windowLayer = CFDictionaryGetValue(dict, kCGWindowLayer);
                //CFDictionaryGetValue(dict, kCGWindowBounds)

                NSLog(@"window bounds %f, %f, matches screen bounds? %d, on level %@",
                      windowBounds.size.width,
                      windowBounds.size.height,
                      CGRectEqualToRect(windowBounds, screenBounds),
                      windowLayer);

                NSNumber *ourDesiredLevelNumber = [NSNumber numberWithInt:kCGDesktopWindowLevel - 1];
                if (CGRectEqualToRect(windowBounds, screenBounds) && [ourDesiredLevelNumber isEqualToNumber:(__bridge NSNumber * _Nonnull)(windowLayer)])
                    CFArrayAppendValue(windowIDs, CFArrayGetValueAtIndex(allWindowIDs, idx));
            }
        }
        CFRelease(windowDescs);
        CFRelease(allWindowIDs);
    }

    CGImageRef cgImage = CGWindowListCreateImageFromArray(screen.frame, windowIDs, kCGWindowImageDefault);

    // Create a bitmap rep from the image...
    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:cgImage];

    // Create an NSImage and add the bitmap rep to it...
    NSImage *image = [[NSImage alloc] init];
    [image addRepresentation:bitmapRep];
    //[bitmapRep release];

    return image;
}
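I imagine I need to call this once per screen and collect the results, something like the following rough, untested sketch:
// Rough idea (untested): collect one wallpaper image per attached display.
NSMutableArray<NSImage *> *wallpapers = [NSMutableArray array];
for (NSScreen *screen in [NSScreen screens]) {
    NSImage *wallpaper = [self desktopAsImageWithScreen:screen];
    if (wallpaper) {
        [wallpapers addObject:wallpaper];
    }
}
But the per-screen filtering above does not seem to pick the right window for secondary displays.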
Any Suggestions?
Thanks in Advance!

faster way to extract video frames to pixels on mac

I'm reading large video files, frame by frame, and hoping there is a faster way to get pixel buffers.
For those codecs that are supported I'm using AVFoundation APIs, and for the remainder (e.g. DNxHD) I'm using QuickTime techniques.
Here's a simplified version of my AVFoundation based approach.
Is there some API I have not noticed, or some attribute I could set to speed things up?
AVURLAsset *myAVURLAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:@"blahblah"] options:nil];
AVAssetTrack *videoTrack = [[myAVURLAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetReader *movieReader = [[AVAssetReader alloc] initWithAsset:myAVURLAsset error:nil];

NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:@(kCVPixelFormatType_32ARGB)
                                                           forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoSettings];
[movieReader addOutput:output];
[movieReader startReading];

bool atEnd = false;
while (!atEnd) {
    CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
    if (sampleBuffer)
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        uint8_t *data = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        // ...do lots of stuff with pixel data
        // ...release and tidy up sampleBuffer etc
    }
    else {
        atEnd = true;
    }
}
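A couple of things that may be worth benchmarking (assumptions on my part, not guaranteed wins for every codec or workload), both configured on the track output before startReading:
// Avoid the defensive copy AVFoundation makes for each sample by default.
output.alwaysCopiesSampleData = NO;

// If your pixel processing can work in YCbCr, reading 8-bit 4:2:2 instead of
// 32ARGB can skip a colour conversion during decode.
NSDictionary *yuvSettings = [NSDictionary dictionaryWithObject:@(kCVPixelFormatType_422YpCbCr8)
                                                         forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];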

Create a thumbnail or image of an AVPlayer at current time

I have implemented an AVPlayer, and I want to take an image or thumbnail when clicking a toolbar button and open it in a new UIViewController with a UIImageView. The image should be scaled exactly like it is in the AVPlayer.
The segue is already working; I just have to implement getting the image at the current play time.
Thanks!
Objective-C
AVAsset *asset = [AVAsset assetWithURL:sourceURL];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc]initWithAsset:asset];
CMTime time = CMTimeMake(1, 1);
CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // CGImageRef won't be released by ARC
Swift
var asset = AVAsset.assetWithURL(sourceURL)
var imageGenerator = AVAssetImageGenerator(asset: asset!)
var time = CMTimeMake(1, 1)
var imageRef = try! imageGenerator.copyCGImageAtTime(time, actualTime: nil)
var thumbnail = UIImage.imageWithCGImage(imageRef)
CGImageRelease(imageRef) // CGImageRef won't be released by ARC
Swift 3.0
var sourceURL = URL(string: "Your Asset URL")
var asset = AVAsset(url: sourceURL!)
var imageGenerator = AVAssetImageGenerator(asset: asset)
var time = CMTimeMake(1, 1)
var imageRef = try! imageGenerator.copyCGImage(at: time, actualTime: nil)
var thumbnail = UIImage(cgImage:imageRef)
Note: interpret the Swift code according to your Swift version.
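If the thumbnail should be scaled and oriented like the player's view, two AVAssetImageGenerator properties may help; a small addition on my part, with a placeholder size:
// Respect the track's preferred transform so the image isn't rotated,
// and cap the decoded size to roughly what the image view needs.
imageGenerator.appliesPreferredTrackTransform = YES;
imageGenerator.maximumSize = CGSizeMake(640, 640); // placeholder size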
Try this
- (UIImage *)takeScreenShot {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:vidURL options:nil];
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    imageGenerator.appliesPreferredTrackTransform = YES;
    NSError *err = NULL;
    CMTime time = CMTimeMake(1, 60); // time at which you want the screenshot
    CGImageRef imgRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:&err];
    UIImage *screenshot = [[UIImage alloc] initWithCGImage:imgRef];
    CGImageRelease(imgRef); // CGImageRef is not managed by ARC
    return screenshot;
}
Hope this helps !!!
Swift 2.x:
let asset = AVAsset(...)
let imageGenerator = AVAssetImageGenerator(asset: asset)
let screenshotTime = CMTime(seconds: 1, preferredTimescale: 1)
if let imageRef = try? imageGenerator.copyCGImageAtTime(screenshotTime, actualTime: nil) {
    let image = UIImage(CGImage: imageRef)
    // do something with your image
}
Add the code below to generate a thumbnail from the video.
AVURLAsset *assetURL = [[AVURLAsset alloc] initWithURL:partOneUrl options:nil];
AVAssetImageGenerator *assetGenerator = [[AVAssetImageGenerator alloc] initWithAsset:assetURL];
assetGenerator.appliesPreferredTrackTransform = YES;
NSError *err = NULL;
CMTime time = CMTimeMake(1, 2);
CGImageRef imgRef = [assetGenerator copyCGImageAtTime:time actualTime:NULL error:&err];
UIImage *one = [[UIImage alloc] initWithCGImage:imgRef];
This is how I get a shot of the currently visible frame on the scene in Swift. The key is to:
1. get the current time of the player, which is of type CMTime
2. convert that time into seconds, of type Float64
3. switch the seconds back to CMTime using CMTimeMake. The first parameter, which is where the seconds go, should be cast to Int64
Code:
var myImage: UIImage?
guard let player = player else { return }
let currentTime: CMTime = player.currentTime() // step 1.
let currentTimeInSecs: Float64 = CMTimeGetSeconds(currentTime) // step 2.
let actionTime: CMTime = CMTimeMake(Int64(currentTimeInSecs), 1) // step 3.
let asset = AVAsset(url: fileUrl)
let imageGenerator = AVAssetImageGenerator(asset: asset)
imageGenerator.appliesPreferredTrackTransform = true // prevent image rotation
do {
    let imageRef = try imageGenerator.copyCGImage(at: actionTime, actualTime: nil)
    myImage = UIImage(cgImage: imageRef)
} catch let err as NSError {
    print(err.localizedDescription)
}
Swift extension for generating thumbnails from video
extension AVPlayer {
    func generateThumbnail(time: CMTime) -> UIImage? {
        guard let asset = currentItem?.asset else { return nil }
        let imageGenerator = AVAssetImageGenerator(asset: asset)
        do {
            let cgImage = try imageGenerator.copyCGImage(at: time, actualTime: nil)
            return UIImage(cgImage: cgImage)
        } catch {
            print(error.localizedDescription)
        }
        return nil
    }
}
When you need to create multiple thumbnails at once, the class AVAssetImageGenerator is golden, as it provides an asynchronous API (see the sketch below).
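A minimal sketch of that asynchronous batch API; the asset, the requested times and the error handling here are placeholders:
// Request several thumbnails in one asynchronous call.
NSArray<NSValue *> *times = @[ [NSValue valueWithCMTime:CMTimeMake(1, 1)],
                               [NSValue valueWithCMTime:CMTimeMake(5, 1)] ];
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
[generator generateCGImagesAsynchronouslyForTimes:times
                                 completionHandler:^(CMTime requestedTime, CGImageRef image,
                                                     CMTime actualTime, AVAssetImageGeneratorResult result,
                                                     NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded && image != NULL) {
        // Use the CGImage here (wrap it in a UIImage/NSImage on the main queue).
    }
}];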
If you need a thumbnail image of the player's current frame, simply render its view (platform specific) or its layer (platform independent):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGSize frameSize = _playerLayer.frame.size;
CGContextRef thumbnailContext = CGBitmapContextCreate(nil, frameSize.width, frameSize.height, 8, 0, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
[_playerLayer renderInContext:thumbnailContext];
CGImageRef playerThumbnail = CGBitmapContextCreateImage(thumbnailContext);
CGContextRelease(thumbnailContext);
This is super fast and works synchronously.
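One small follow-up (my addition): the CGImageRef returned by CGBitmapContextCreateImage is not managed by ARC, so wrap it in a platform image object and balance the Create call yourself:
// Wrap the bitmap in a platform image and release the Create-rule CGImage.
UIImage *thumbnail = [UIImage imageWithCGImage:playerThumbnail]; // use NSImage on macOS
CGImageRelease(playerThumbnail);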
Code for 2022:
seconds = .. the normal human-meaning desired time position in the video
guard let pl = .. your player ..
guard let ite = pl.currentItem ..

let testGen = AVAssetImageGenerator(asset: ite.asset)
testGen.maximumSize = CGSize(width: 0, height: .. height of your preview box)
testGen.requestedTimeToleranceBefore = .zero // during development
// or something like ... CMTime(value: .. your tolerance .., timescale: 600)
testGen.requestedTimeToleranceAfter = .zero // during development
// ditto

if #available(tvOS 16, *) {
    Task { [weak self] ..
        do {
            let ct = CMTime(value: CMTimeValue(seconds), timescale: 1)
            // NOTE THE "1"
            let (foundImage, foundTime) = try await testGen.image(at: ct)
            let foundAsSecs = CMTimeGetSeconds(foundTime)
            print("tried gen at \(seconds) found as \(foundAsSecs) \n")
            self. .. your preview .image = UIImage(cgImage: foundImage)
        } catch {
            print("gen err \(error)")
        }
    }
}
Setting the two tolerances is a sophisticated issue; Google it.
Watch out for the gotcha where a timescale of 1 is needed for the CMTime.

How to get now-playing information from a third-party music application?

I want to get now-playing information.
So I am using this code:
NSDictionary *info = [[MPNowPlayingInfoCenter defaultCenter] nowPlayingInfo];
NSString *title = [info valueForKey:MPMediaItemPropertyTitle];
NSLog(@"%@", title);

MPMusicPlayerController *pc = [MPMusicPlayerController iPodMusicPlayer];
MPMediaItem *playingItem = [pc nowPlayingItem];
if (playingItem) {
    NSInteger mediaType = [[playingItem valueForProperty:MPMediaItemPropertyMediaType] integerValue];
    if (mediaType == MPMediaTypeMusic) {
        NSString *songTitle = [playingItem valueForProperty:MPMediaItemPropertyTitle];
        NSString *albumTitle = [playingItem valueForProperty:MPMediaItemPropertyAlbumTitle];
        NSString *artist = [playingItem valueForProperty:MPMediaItemPropertyArtist];
        NSString *genre = [playingItem valueForProperty:MPMediaItemPropertyGenre];
        TweetTextField.text = [NSString stringWithFormat:@"#nowplaying %@ - %@ / %@ #%@", artist, songTitle, albumTitle, genre];

        MPMediaItemArtwork *artwork = [playingItem valueForProperty:MPMediaItemPropertyArtwork];
        CGSize newSize = CGSizeMake(250, 250);
        UIGraphicsBeginImageContext(newSize);
        [[artwork imageWithSize:CGSizeMake(100.0, 100.0)] drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        _imageView.image = newImage;
    }
    if (_imageView.image == nil) {
    } else {
        _tableView.alpha = 0.5;
    }
}
But this code only gets now-playing information from the default iPod application.
How can I get now-playing information from third-party music applications
(e.g. Mobile Safari, the YouTube app, gMusic, Melodies, etc.)?
I think this isn't possible. The documentation states that MPNowPlayingInfoCenter is only for setting information on the lock screen.
Here is a related question.

defaultRepresentation fullScreenImage on ALAsset does not return full screen image

In my application I save images to an album as assets. I also want to retrieve them and display them full screen. I use the following code:
ALAsset *lastPicture = [scrollArray objectAtIndex:iAsset];
ALAssetRepresentation *defaultRep = [lastPicture defaultRepresentation];
UIImage *image = [UIImage imageWithCGImage:[defaultRep fullScreenImage]
                                     scale:[defaultRep scale]
                               orientation:(UIImageOrientation)[defaultRep orientation]];
The problem is that the image returned is nil. I have read in the ALAssetRepresentation reference that nil is returned when the image cannot fit.
I put this image into a UIImageView that has the size of the iPad screen. I was wondering if you could help me with this issue?
Thank you in advance.
I'm not a fan of fullScreenImage or fullResolutionImage. I found that when you do this on multiple assets in a queue, even if you release the UIImage immediately, memory usage increases dramatically when it shouldn't. Also, the UIImage returned by fullScreenImage or fullResolutionImage is still compressed, meaning that it will be decompressed before being drawn for the first time, which happens on the main thread and blocks your UI.
I prefer to use this method.
- (UIImage *)fullSizeImageForAssetRepresentation:(ALAssetRepresentation *)assetRepresentation
{
    UIImage *result = nil;
    NSData *data = nil;

    uint8_t *buffer = (uint8_t *)malloc(sizeof(uint8_t) * [assetRepresentation size]);
    if (buffer != NULL) {
        NSError *error = nil;
        NSUInteger bytesRead = [assetRepresentation getBytes:buffer fromOffset:0 length:[assetRepresentation size] error:&error];
        data = [NSData dataWithBytes:buffer length:bytesRead];
        free(buffer);
    }

    if ([data length])
    {
        CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)data, nil);

        NSMutableDictionary *options = [NSMutableDictionary dictionary];
        [options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceShouldAllowFloat];
        [options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceCreateThumbnailFromImageAlways];
        [options setObject:(id)[NSNumber numberWithFloat:640.0f] forKey:(id)kCGImageSourceThumbnailMaxPixelSize];
        //[options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceCreateThumbnailWithTransform];

        CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, (__bridge CFDictionaryRef)options);
        if (imageRef) {
            result = [UIImage imageWithCGImage:imageRef scale:[assetRepresentation scale] orientation:(UIImageOrientation)[assetRepresentation orientation]];
            CGImageRelease(imageRef);
        }

        if (sourceRef)
            CFRelease(sourceRef);
    }

    return result;
}
You can use it like this:
// Get the full image in a background thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    UIImage *image = [self fullSizeImageForAssetRepresentation:asset.defaultRepresentation];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Do something with the UIImage
    });
});