Create a thumbnail or image of an AVPlayer at current time - objective-c

I have implemented an AVPlayer, and I want to capture an image or thumbnail when a toolbar button is tapped and open it in a new UIViewController with a UIImageView. The image should be scaled exactly like the AVPlayer's content.
The segue already works; I just need to grab the image at the current playback time.
Thanks!

Objective-C
AVAsset *asset = [AVAsset assetWithURL:sourceURL];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc]initWithAsset:asset];
CMTime time = CMTimeMake(1, 1);
CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // CGImageRef won't be released by ARC
Swift
let asset = AVAsset(URL: sourceURL)
let imageGenerator = AVAssetImageGenerator(asset: asset)
let time = CMTimeMake(1, 1)
let imageRef = try! imageGenerator.copyCGImageAtTime(time, actualTime: nil)
let thumbnail = UIImage(CGImage: imageRef)
// No CGImageRelease needed here: CGImage values returned to Swift are memory managed
Swift 3.0
let sourceURL = URL(string: "Your Asset URL")
let asset = AVAsset(url: sourceURL!)
let imageGenerator = AVAssetImageGenerator(asset: asset)
let time = CMTimeMake(1, 1)
let imageRef = try! imageGenerator.copyCGImage(at: time, actualTime: nil)
let thumbnail = UIImage(cgImage: imageRef)
Note: interpret the Swift code according to your Swift version.

Try this:
- (UIImage *)takeScreenShot {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:vidURL options:nil];
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    imageGenerator.appliesPreferredTrackTransform = YES;
    NSError *err = NULL;
    CMTime time = CMTimeMake(1, 60); // the time at which you want the screenshot
    CGImageRef imgRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:&err];
    UIImage *screenshot = [[UIImage alloc] initWithCGImage:imgRef];
    CGImageRelease(imgRef); // CGImageRef is not released by ARC
    return screenshot;
}
Hope this helps!

Swift 2.x:
let asset = AVAsset(...)
let imageGenerator = AVAssetImageGenerator(asset: asset)
let screenshotTime = CMTime(seconds: 1, preferredTimescale: 1)
if let imageRef = try? imageGenerator.copyCGImageAtTime(screenshotTime, actualTime: nil) {
let image = UIImage(CGImage: imageRef)
// do something with your image
}

Add the code below to generate a thumbnail from a video.
AVURLAsset *assetURL = [[AVURLAsset alloc] initWithURL:partOneUrl options:nil];
AVAssetImageGenerator *assetGenerator = [[AVAssetImageGenerator alloc] initWithAsset:assetURL];
assetGenerator.appliesPreferredTrackTransform = YES;
NSError *err = NULL;
CMTime time = CMTimeMake(1, 2);
CGImageRef imgRef = [assetGenerator copyCGImageAtTime:time actualTime:NULL error:&err];
UIImage *one = [[UIImage alloc] initWithCGImage:imgRef];
CGImageRelease(imgRef); // copyCGImageAtTime returns a +1 reference that ARC will not release

This is how I get a shot of the currently visible frame on the scene in Swift:
The key is to
get the current time of the player, which is of type CMTime
convert that time into seconds of type Float64
turn the seconds back into a CMTime using CMTimeMake; the first parameter (where the seconds go) has to be cast to Int64
Code:
var myImage: UIImage?
guard let player = player else { return }
let currentTime: CMTime = player.currentTime() // step 1.
let currentTimeInSecs: Float64 = CMTimeGetSeconds(currentTime) // step 2.
let actionTime: CMTime = CMTimeMake(Int64(currentTimeInSecs), 1) // step 3.
let asset = AVAsset(url: fileUrl)
let imageGenerator = AVAssetImageGenerator(asset: asset)
imageGenerator.appliesPreferredTrackTransform = true // prevent image rotation
do {
    let imageRef = try imageGenerator.copyCGImage(at: actionTime, actualTime: nil)
    myImage = UIImage(cgImage: imageRef)
} catch let err as NSError {
    print(err.localizedDescription)
}
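Note that step 3 truncates to whole seconds. As an optional refinement (an assumption on my part, not part of the original steps), you can keep fractional seconds and tighten the generator's tolerances so the captured frame is closer to the exact playback position:
let preciseTime = CMTime(seconds: currentTimeInSecs, preferredTimescale: 600) // preserves fractional seconds
imageGenerator.requestedTimeToleranceBefore = .zero // trade speed for frame accuracy
imageGenerator.requestedTimeToleranceAfter = .zero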

Swift extension for generating thumbnails from video
extension AVPlayer {
func generateThumbnail(time: CMTime) -> UIImage? {
guard let asset = currentItem?.asset else { return nil }
let imageGenerator = AVAssetImageGenerator(asset: asset)
do {
let cgImage = try imageGenerator.copyCGImage(at: time, actualTime: nil)
return UIImage(cgImage: cgImage)
} catch {
print(error.localizedDescription)
}
return nil
}
}
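A minimal usage sketch for this extension (player and imageView are assumptions). Note that copyCGImage(at:actualTime:) is synchronous, so consider calling it off the main thread for long assets:
if let thumbnail = player.generateThumbnail(time: player.currentTime()) {
    imageView.image = thumbnail
}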

When you need to create multiple thumbnails at once, the AVAssetImageGenerator class is golden, as it provides an asynchronous API.
If you need a thumbnail image of the player's current frame, simply render its View (platform specific) or its Layer (platform independent):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGSize frameSize = _playerLayer.frame.size;
CGContextRef thumbnailContext = CGBitmapContextCreate(nil, frameSize.width, frameSize.height, 8, 0, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
[_playerLayer renderInContext:thumbnailContext];
CGImageRef playerThumbnail = CGBitmapContextCreateImage(thumbnailContext);
CGContextRelease(thumbnailContext);
This is super fast and works synchronously.
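For reference, a rough Swift sketch of the same layer-rendering idea, assuming a playerLayer property and iOS 10+ for UIGraphicsImageRenderer:
let renderer = UIGraphicsImageRenderer(size: playerLayer.bounds.size)
let snapshot = renderer.image { context in
    playerLayer.render(in: context.cgContext) // draw the layer into the bitmap context
}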

Code for 2022:
seconds = .. the desired time position in the video, in ordinary seconds
guard let pl = .. your player ..
guard let ite = pl.currentItem ..
let testGen = AVAssetImageGenerator(asset: ite.asset)
testGen.maximumSize = CGSize(width: 0, height: .. height of your preview box)
testGen.requestedTimeToleranceBefore = .zero // during development
// or something like ... CMTime(value: .. your tolerance .., timescale: 600)
testGen.requestedTimeToleranceAfter = .zero // during development
// ditto
if #available(tvOS 16, *) {
Task { [weak self] ..
do {
let ct = CMTime(value: CMTimeValue(seconds), timescale: 1)
// NOTE THE "1"
let (foundImage, foundTime) = try await testGen.image(at: ct)
let foundAsSecs = CMTimeGetSeconds(foundTime)
print("tried gen at \(seconds) found as \(foundAsSecs) \n")
self. .. your preview .image = UIImage(cgImage: foundImage)
} catch {
print("gen err \(error)")
}
}
}
Setting the two tolerances is a subtle issue; research it before shipping.
Watch out for the gotcha that a timescale of 1 is needed for that CMTime, since the value here is whole seconds.
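If you do need sub-second positions, one hedged alternative to the integer-seconds CMTime above is to build it from a Double:
let ct = CMTime(seconds: seconds, preferredTimescale: 600) // preserves fractional seconds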

Related

Get Thumbnail Image from PHAsset

I want to get the thumbnail image of my PHAsset. I have already fetched a PHAsset from the photo library and now want its thumbnail image.
Can you help me in Objective-C?
Thanks!
In case someone is looking for a Swift solution, here is an extension:
extension PHAsset {
var thumbnailImage : UIImage {
get {
let manager = PHImageManager.default()
let option = PHImageRequestOptions()
var thumbnail = UIImage()
option.isSynchronous = true
manager.requestImage(for: self, targetSize: CGSize(width: 300, height: 300), contentMode: .aspectFit, options: option, resultHandler: {(result, info)->Void in
thumbnail = result!
})
return thumbnail
}
}
}
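Usage is then a one-liner (a sketch; asset and imageView are assumptions). Keep in mind that option.isSynchronous = true blocks the calling thread until the image is delivered:
imageView.image = asset.thumbnailImage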
The PHImageManager class has the method:
- requestImageForAsset:targetSize:contentMode:options:resultHandler:
Here is a complete answer for Swift 4 showing the function and a call against it. Also, make sure you have the photo library privacy key set in your plist.
import Photos
func requestImage(for asset: PHAsset,
targetSize: CGSize,
contentMode: PHImageContentMode,
completionHandler: @escaping (UIImage?) -> ()) {
let imageManager = PHImageManager()
imageManager.requestImage(for: asset,
targetSize: targetSize,
contentMode: contentMode,
options: nil) { (image, _) in
completionHandler(image)
}
}
let asset = // your existing PHAsset
let targetSize = CGSize(width: 100, height: 100)
let contentModel = PHImageContentMode.aspectFit
requestImage(for: asset, targetSize: targetSize, contentMode: contentModel, completionHandler: { image in
// Do something with your image if it exists
})
PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
options.resizeMode = PHImageRequestOptionsResizeModeExact;
NSInteger retinaMultiplier = [UIScreen mainScreen].scale;
CGSize retinaSquare = CGSizeMake(imageView.bounds.size.width * retinaMultiplier, imageView.bounds.size.height * retinaMultiplier);
[[PHImageManager defaultManager]
requestImageForAsset:(PHAsset *)_asset
targetSize:retinaSquare
contentMode:PHImageContentModeAspectFill
options:options
resultHandler:^(UIImage *result, NSDictionary *info) {
imageView.image =[UIImage imageWithCGImage:result.CGImage scale:retinaMultiplier orientation:result.imageOrientation];
}];
I got this answer from "How to fetch squared thumbnails from PHImageManager?"

How to get image size from URL in iOS

How can I get the size (height/width) of an image from a URL in Objective-C? I want to size my container according to the image. I am using AFNetworking 3.0.
I could use SDWebImage if it fulfills my requirement.
Knowing the size of an image before actually loading it can be necessary in a number of cases. For example, setting the height of a tableView cell in the heightForRowAtIndexPath method while loading the actual image later in the cellForRowAtIndexPath (this is a very frequent catch 22).
One simple way to do it, is to read the image header from the server URL using the Image I/O interface:
#import <ImageIO/ImageIO.h>
NSString *imageURL = @"http://www.myimageurl.com/image.png";
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[NSURL URLWithString:imageURL], NULL);
NSDictionary *imageHeader = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
NSLog(@"Image header %@", imageHeader);
NSLog(@"PixelHeight %@", [imageHeader objectForKey:@"PixelHeight"]);
CFRelease(source);
Swift 4.x
Xcode 12.x
func sizeOfImageAt(url: URL) -> CGSize? {
// with CGImageSource we avoid loading the whole image into memory
guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
return nil
}
let propertiesOptions = [kCGImageSourceShouldCache: false] as CFDictionary
guard let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, propertiesOptions) as? [CFString: Any] else {
return nil
}
if let width = properties[kCGImagePropertyPixelWidth] as? CGFloat,
let height = properties[kCGImagePropertyPixelHeight] as? CGFloat {
return CGSize(width: width, height: height)
} else {
return nil
}
}
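A usage sketch for the function above (the URL is just a placeholder):
if let url = URL(string: "http://www.myimageurl.com/image.png"),
   let size = sizeOfImageAt(url: url) {
    print("Remote image is \(size.width) x \(size.height) pixels")
}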
Use GCD, the asynchronous mechanism in iOS, to download the image without blocking your main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// Download IMAGE using URL
NSData *data = [[NSData alloc]initWithContentsOfURL:URL];
// COMPOSE IMAGE FROM NSData
UIImage *image = [[UIImage alloc]initWithData:data];
dispatch_async(dispatch_get_main_queue(), ^{
// Update the UI on the main thread
// Calculate the height & width of the image
CGFloat height = image.size.height;
CGFloat width = image.size.width;
});
});
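If you prefer URLSession over NSData's initWithContentsOfURL: for the download, here is a hedged Swift sketch of the same idea (url and imageView are assumptions):
URLSession.shared.dataTask(with: url) { data, _, _ in
    guard let data = data, let image = UIImage(data: data) else { return }
    DispatchQueue.main.async {
        // Back on the main thread: read the decoded size and update the UI
        print("Downloaded image is \(image.size.width) x \(image.size.height) points")
        imageView.image = image
    }
}.resume()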
For Swift 4 use this:
let imageURL = URL(string: post.imageBigPath)!
let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil)
let imageHeader = CGImageSourceCopyPropertiesAtIndex(source!, 0, nil)! as NSDictionary
print("Image header: \(imageHeader)")
The header would look like:
Image header: {
ColorModel = RGB;
Depth = 8;
PixelHeight = 640;
PixelWidth = 640;
"{Exif}" = {
PixelXDimension = 360;
PixelYDimension = 360;
};
"{JFIF}" = {
DensityUnit = 0;
JFIFVersion = (
1,
0,
1
);
XDensity = 72;
YDensity = 72;
};
"{TIFF}" = {
Orientation = 0;
}; }
So you can get the width and height from it.
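Pulling the pixel dimensions out of that header dictionary could look like this (a sketch based on the snippet above):
if let header = imageHeader as? [String: Any],
   let width = header["PixelWidth"] as? Int,
   let height = header["PixelHeight"] as? Int {
    print("Image is \(width) x \(height) pixels")
}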
You can also try it like this:
NSData *data = [[NSData alloc]initWithContentsOfURL:URL];
UIImage *image = [[UIImage alloc]initWithData:data];
CGFloat height = image.size.height;
CGFloat width = image.size.width;

Objective-C to Swift Conversion Issue on CIHistogramDisplayFilter

I have this code snippet to display a histogram in Objective-C, which works fine. I am, however, having a hard time converting it to Swift.
Objective-C
//Show Histogram
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
NSUInteger count = 256;
count = count <= 256 ? count : 256;
count = count >= 1 ? count : 1;
NSDictionary *params = @{kCIInputImageKey: ciImage,
kCIInputExtentKey: [CIVector vectorWithCGRect:[ciImage extent]],
@"inputCount": @(256), @"inputScale": @(200)
};
CIFilter *filter = [CIFilter filterWithName:@"CIAreaHistogram"
withInputParameters:params];
CIImage *outImage = [filter outputImage];
//---------------------------------------------
CIContext *context = [CIContext contextWithOptions:nil];
NSDictionary *params2 = @{
kCIInputImageKey: outImage
};
CIFilter *filter2 = [CIFilter filterWithName:@"CIHistogramDisplayFilter"
withInputParameters:params2];
CIImage *outputImage = [filter2 outputImage];
CGRect outExtent = [outputImage extent];
CGImageRef cgImage = [context createCGImage:outputImage
fromRect:outExtent];
UIImage *outImage2 = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
// resize
UIImage *resized = [self resizeImage:outImage2
withQuality:kCGInterpolationNone
rate:2.5];
//Remove the default grey background
resized = [self removeColorFromImage:resized grayLevel:137];
dispatch_async(dispatch_get_main_queue(),
^{
self.histogramView.image = resized;
});
While converting it to Swift, I started getting these errors:
var pixelBuffer: CVPixelBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!
var attachments: CFDictionaryRef = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate)!
var ciImage: CIImage = CIImage(cVPixelBuffer: pixelBuffer, options: (attachments as! [NSObject : AnyObject]))
That last line gives me the error:
Argument labels '(cVPixelBuffer:, options:)' do not match any available overloads
//Show Histogram -- Swift version
var pixelBuffer: CVPixelBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!
var attachments: CFDictionaryRef = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate)!
var ciImage: CIImage = CIImage(CVPixelBuffer: pixelBuffer, options: attachments as? [String : AnyObject])
var count = 256
count = count <= 256 ? count : 256
count = count >= 1 ? count : 1
let params = [kCIInputImageKey: ciImage,
kCIInputExtentKey:CIVector.init(CGRect: ciImage.extent),
"inputCount":256, "inputScale":200]
let filter:CIFilter! = CIFilter(name: "CIAreaHistogram" , withInputParameters: params)
let outImage: CIImage = (filter?.outputImage)!
//---------------------------------------------
let context:CIContext = CIContext(options: nil)
let params2 = [kCIInputImageKey: outImage]
let filter2 = CIFilter(name: "CIHistogramDisplayFilter",withInputParameters: params2)
let outputImage:CIImage = filter2!.outputImage!
let outExtent:CGRect = outputImage.extent
let cgImage:CGImageRef = context.createCGImage(outputImage, fromRect: outExtent)
let outImage2:UIImage = UIImage(CGImage: cgImage)
// resize
let resized:UIImage = self.resizeImage(outImage2,withQuality:CGInterpolation.None,rate:2.5)
//Remove the default grey background
resized = self.removeColorFromImage(resized, grayLevel:137)
dispatch_async(dispatch_get_main_queue()) {
self.histogramView.image = resized
}
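For what it's worth, the error on that CIImage line is usually just the initializer label: in Swift 2.x it is spelled CIImage(CVPixelBuffer:options:), and in Swift 3 the label is lowercased. A hedged sketch of the Swift 3 spelling, reusing pixelBuffer and attachments from the code above:
let ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: attachments as? [String: Any])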

objective-c - AVAssetReader and AVAssetWriter to overlay video

I am trying to overlay a recorded video with some images using AVAssetReader and AVAssetWriter. Following this tutorial, I am able to copy a video (and its audio) into a new file. Now my objective is to overlay some of the initial video frames with images using this code:
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next video sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
NSDictionary *attributes = @{NSFontAttributeName:font, NSForegroundColorAttributeName:[UIColor lightTextColor]};
UIImage *img = [self imageFromText:@"test" :attributes];
CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
[ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
if (sampleBuffer != NULL)
{
BOOL success = [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
And to create an image from text:
-(UIImage *)imageFromText:(NSString *)text :(NSDictionary *)attributes{
CGSize size = [text sizeWithAttributes:attributes];
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
[text drawAtPoint:CGPointMake(0.0, 0.0) withAttributes:attributes];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
The video and audio are copied, but there is no text on my video.
Question 1: Why is this code not working?
Moreover, I want to be able to check the timecode of the frame currently being read. For example, I would like to insert text with the current timecode into the video.
I tried this code, following this tutorial:
AVAsset *localAsset = [AVAsset assetWithURL:mURL];
NSError *localError;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:localAsset error:&localError];
BOOL success = (assetReader != nil);
// Create asset reader output for the first timecode track of the asset
if (success) {
AVAssetTrack *timecodeTrack = nil;
// Grab first timecode track, if the asset has them
NSArray *timecodeTracks = [localAsset tracksWithMediaType:AVMediaTypeTimecode];
if ([timecodeTracks count] > 0)
timecodeTrack = [timecodeTracks objectAtIndex:0];
if (timecodeTrack) {
AVAssetReaderTrackOutput *timecodeOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:timecodeTrack outputSettings:nil];
[assetReader addOutput:timecodeOutput];
} else {
NSLog(@"%@ has no timecode tracks", localAsset);
}
}
But I get the log:
[...] has no timecode tracks
Question 2: Why doesn't my video have any AVMediaTypeTimecode track? And how can I get the current frame's timecode?
Thanks for your help.
I found the solutions:
To overlay video frames, you need to fix the decompression settings:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* decompressionVideoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
// If there is a video track to read, set the decompression settings to BGRA and create the asset reader output.
assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
To get the frame timestamp, you have to read the video information and then use a counter to track the current timestamp:
durationSeconds = CMTimeGetSeconds(asset.duration);
timePerFrame = 1.0 / (Float64)assetVideoTrack.nominalFrameRate;
totalFrames = durationSeconds * assetVideoTrack.nominalFrameRate;
Then, inside this loop
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
you can find the timestamp:
CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL){
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (pixelBuffer) {
Float64 secondsIn = ((float)counter/totalFrames)*durationSeconds;
CMTime imageTimeEstimate = CMTimeMakeWithSeconds(secondsIn, 600);
mergeTime = CMTimeGetSeconds(imageTimeEstimate);
counter++;
}
}
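A more direct alternative, shown here as a Swift sketch (an assumption on my part, not part of the original answer): every sample buffer already carries its presentation timestamp, so you can read it instead of estimating from a frame counter.
let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer) // the frame's presentation time
let seconds = CMTimeGetSeconds(pts)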
I hope this helps!

How to retrieve the most recent photo from Camera Roll on iOS?

I'm having a hard time figuring out how to programmatically retrieve the most recent photo in the camera roll without user intervention. To be clear, I do not want to use an image picker; I want the app to automatically grab the newest photo when it opens.
I know this is possible because I've seen a similar app do it, but I can't seem to find any info on it.
One way is to use AssetsLibrary and use n - 1 as the index for enumeration.
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
[assetsLibrary enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos
usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
if (nil != group) {
// be sure to filter the group so you only get photos
[group setAssetsFilter:[ALAssetsFilter allPhotos]];
if (group.numberOfAssets > 0) {
[group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:group.numberOfAssets - 1]
options:0
usingBlock:^(ALAsset *result, NSUInteger index, BOOL *stop) {
if (nil != result) {
ALAssetRepresentation *repr = [result defaultRepresentation];
// this is the most recent saved photo
UIImage *img = [UIImage imageWithCGImage:[repr fullResolutionImage]];
// we only need the first (most recent) photo -- stop the enumeration
*stop = YES;
}
}];
}
}
*stop = NO;
} failureBlock:^(NSError *error) {
NSLog(@"error: %@", error);
}];
Instead of messing with the index, you can enumerate through the list in reverse. This pattern works well if you want the most recent image or if you want to list the images in a UICollectionView with the most recent image first.
Example to return the most recent image:
[group enumerateAssetsWithOptions:NSEnumerationReverse usingBlock:^(ALAsset *asset, NSUInteger index, BOOL *stop) {
if (asset) {
ALAssetRepresentation *repr = [asset defaultRepresentation];
UIImage *img = [UIImage imageWithCGImage:[repr fullResolutionImage]];
*stop = YES;
}
}];
In iOS 8, Apple added the Photos library which makes for easier querying. In iOS 9, ALAssetLibrary is deprecated.
Here's some Swift code to get the most recent photo taken with that framework.
import UIKit
import Photos
struct LastPhotoRetriever {
func queryLastPhoto(resizeTo size: CGSize?, queryCallback: (UIImage? -> Void)) {
let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
// fetchOptions.fetchLimit = 1 // This is available in iOS 9.
if let fetchResult = PHAsset.fetchAssetsWithMediaType(PHAssetMediaType.Image, options: fetchOptions) {
if let asset = fetchResult.firstObject as? PHAsset {
let manager = PHImageManager.defaultManager()
// If you already know how you want to resize,
// great, otherwise, use full-size.
let targetSize = size == nil ? CGSize(width: asset.pixelWidth, height: asset.pixelHeight) : size!
// I arbitrarily chose AspectFit here. AspectFill is
// also available.
manager.requestImageForAsset(asset,
targetSize: targetSize,
contentMode: .AspectFit,
options: nil,
resultHandler: { image, info in
queryCallback(image)
})
}
}
}
}
Swift 3.0:
1) Import the Photos framework at the top of your file, before your class declaration.
import Photos
2) Add the following method, which returns the last image.
func queryLastPhoto(resizeTo size: CGSize?, queryCallback: @escaping ((UIImage?) -> Void)) {
let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
let requestOptions = PHImageRequestOptions()
requestOptions.isSynchronous = true
let fetchResult = PHAsset.fetchAssets(with: PHAssetMediaType.image, options: fetchOptions)
if let asset = fetchResult.firstObject {
let manager = PHImageManager.default()
let targetSize = size == nil ? CGSize(width: asset.pixelWidth, height: asset.pixelHeight) : size!
manager.requestImage(for: asset,
targetSize: targetSize,
contentMode: .aspectFit,
options: requestOptions,
resultHandler: { image, info in
queryCallback(image)
})
}
}
3) Then call this method somewhere in your app (maybe a button action):
@IBAction func pressedLastPictureAttachmentButton(_ sender: Any) {
queryLastPhoto(resizeTo: nil){
image in
print(image)
}
}
To add to Art Gillespie's answer, using the fullResolutionImage uses the original image which — depending on the device's orientation when taking the photo — could leave you with an upside down, or -90° image.
To get the modified but optimised image for this, use fullScreenImage instead:
UIImage *img = [UIImage imageWithCGImage:[repr fullScreenImage]];
Answer to the question (in Swift):
func pickingTheLastImageFromThePhotoLibrary() {
let assetsLibrary: ALAssetsLibrary = ALAssetsLibrary()
assetsLibrary.enumerateGroupsWithTypes(ALAssetsGroupSavedPhotos,
usingBlock: { (let group: ALAssetsGroup!, var stop: UnsafeMutablePointer<ObjCBool>) -> Void in
if (group != nil) {
// Be sure to filter the group so you only get photos
group.setAssetsFilter(ALAssetsFilter.allPhotos())
group.enumerateAssetsWithOptions(NSEnumerationOptions.Reverse,
usingBlock: { (let asset: ALAsset!,
let index: Int,
var stop: UnsafeMutablePointer<ObjCBool>)
-> Void in
if(asset != nil) {
/*
Returns a CGImage representation of the asset.
Using the fullResolutionImage uses the original image which — depending on the
device's orientation when taking the photo — could leave you with an upside down,
or -90° image. To get the modified, but optimised image for this, use
fullScreenImage instead.
*/
// let myCGImage: CGImage! = asset.defaultRepresentation().fullResolutionImage().takeUnretainedValue()
/*
Returns a CGImage of the representation that is appropriate for displaying full
screen.
*/
// let myCGImage: CGImage! = asset.defaultRepresentation().fullScreenImage().takeUnretainedValue()
/* Returns a thumbnail representation of the asset. */
let myCGImage: CGImage! = asset.thumbnail().takeUnretainedValue()
// Here we set the image included in the UIImageView
self.myUIImageView.image = UIImage(CGImage: myCGImage)
stop.memory = ObjCBool(true)
}
})
}
stop.memory = ObjCBool(false)
})
{ (let error: NSError!) -> Void in
println("A problem occurred: \(error.localizedDescription)")
}
}
Using the Photos library (Objective-C)
PHFetchOptions *fetchOptions = [[PHFetchOptions alloc] init];
fetchOptions.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:NO]];
PHFetchResult *assetsFetchResult = [PHAsset fetchAssetsInAssetCollection:assetCollection options:fetchOptions];
if (assetsFetchResult.count>0) {
PHAsset *asset = [assetsFetchResult objectAtIndex:0];
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat dimension = 55.0f; // set your required size
CGSize size = CGSizeMake(dimension*scale, dimension*scale);
[[PHImageManager defaultManager] requestImageForAsset:asset
targetSize:size
contentMode:PHImageContentModeAspectFit
options:nil
resultHandler:^(UIImage *result, NSDictionary *info) {
// do your thing with the image
}
];
}