Load an image quickly from a URL - Objective-C

I have used SDWebImage's classes to load an image from a URL into an image view, but it is still taking a long time to load.
NSString *filePath1 = [NSString stringWithFormat:@"%@", pathimage1];
[cell.image1 setImageWithURL:[NSURL URLWithString:filePath1] placeholderImage:[UIImage imageNamed:@"placeholder.png"]];
Can anyone tell me what I am doing wrong, or what the best way is to load an image from a URL?

Try this:
NSString *filePath1 = [NSString stringWithFormat:@"%@", pathimage1];
[cell.image1 setImageWithURL:[NSURL URLWithString:filePath1]
            placeholderImage:[UIImage imageNamed:@"placeholder.png"]
                     success:^(UIImage *image, BOOL cached)
{
    dispatch_async(dispatch_get_main_queue(), ^{
        cell.image1.image = image;
    });
    NSLog(@"success Block");
}
                     failure:^(NSError *error)
{
    NSLog(@"failure Block");
}];
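Note that success:/failure: comes from an older SDWebImage release; with the newer sd_-prefixed API (the same completion-block signature used in an answer further down), a rough equivalent would be:
NSString *filePath1 = [NSString stringWithFormat:@"%@", pathimage1];
[cell.image1 sd_setImageWithURL:[NSURL URLWithString:filePath1]
               placeholderImage:[UIImage imageNamed:@"placeholder.png"]
                      completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType, NSURL *imageURL) {
                          // The image view is populated by the call itself; the block is only for logging here.
                          if (error) {
                              NSLog(@"image load failed: %@", error);
                          }
                      }];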

Loading images from the network may take a long time for several reasons, two of the most obvious being:
A slow internet connection on the device (WiFi / 3G / 2G).
An overloaded server.
To speed things up, if you have control over the server and the images stored on it, you could load a smaller version (a low-quality preview or thumbnail) initially, followed by the larger / full-size version when needed:
typedef enum {
    MyImageSizeSmall,
    MyImageSizeMedium,
    MyImageSizeLarge
} MyImageSize;

- (void)requestAndLoadImageWithSize:(MyImageSize)imageSize intoImageView:(UIImageView *)imageView
{
    NSString *imagePath;
    switch (imageSize)
    {
        case MyImageSizeSmall:
            imagePath = @"http://path.to/small_image.png";
            break;
        case MyImageSizeMedium:
            imagePath = @"http://path.to/medium_image.png";
            break;
        case MyImageSizeLarge:
            imagePath = @"http://path.to/large_image.png";
            break;
    }
    [imageView setImageWithURL:[NSURL URLWithString:imagePath]
              placeholderImage:[UIImage imageNamed:@"placeholder.png"]];
}

Related

When I use SDWebImage to load an image, I want to know whether the image has already been loaded, but the cell scrolling stutters

What I want to achieve: when the image is not yet loaded, the imageView shows a default image; once the image has been loaded, the imageView shows the URL image. The problem is that the singleton (SDImageCache) check seems to cause the stutter. When I remove the singleton (SDImageCache) call, it's fine.
BOOL isInCache = ([[SDImageCache sharedImageCache]
imageFromCacheForKey:imgUrlStr] != nil);
if (isInCache) {
[self.imgView sd_setImageWithURL:[NSURL URLWithString:imgUrlStr]
placeholderImage:[UIImage imageNamed:@"defaultImg"]
options:SDWebImageRetryFailed];
} else {
self.imgView.image = [UIImage imageNamed:@"defaultImg"];
}
I am using the code below in my application.
[IMAGE_VIEW_OBJECT_NAME sd_setImageWithURL:IMAGE_URL placeholderImage:PLACEHOLDER_IMAGE completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType, NSURL *imageURL) {
if(error)
{
_imgUserPic.image = PLACEHOLDER_IMAGE;
}
else
{
_imgUserPic.image = image;
}
}];
It helps me show a default image when the image is not available at the given URL.
Hope it will work for you.
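If the synchronous SDImageCache lookup itself is what makes the table stutter, another option is to move that lookup off the main thread and show the default image immediately. This is only an untested sketch that reuses the imageFromCacheForKey: call from the question:
// Show the default image right away so the cell renders without waiting.
self.imgView.image = [UIImage imageNamed:@"defaultImg"];

// Do the potentially disk-bound cache lookup off the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *cached = [[SDImageCache sharedImageCache] imageFromCacheForKey:imgUrlStr];
    dispatch_async(dispatch_get_main_queue(), ^{
        if (cached) {
            self.imgView.image = cached;
        } else {
            [self.imgView sd_setImageWithURL:[NSURL URLWithString:imgUrlStr]
                            placeholderImage:[UIImage imageNamed:@"defaultImg"]
                                     options:SDWebImageRetryFailed];
        }
    });
});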

How to implement lazy loading of image without using SDImageCache?

How can I implement lazy loading of images without using the library at the link below:
https://github.com/rs/SDWebImage
Note: I don't want to use any third-party tool. I want to do it myself.
If you don't want to use any library (some are MIT or public-domain licensed, so I think the only reason not to use one is that you want to learn how to build it yourself), here is a simple and effective way to do it:
1: Put a temporary placeholder image in your imageView.
2: Get your image on a background thread.
2-a: If you want the cache feature, search for a cached image.
2-b: If there is no cache feature or no cached image, get your image from its source.
2-c: If the cache feature is enabled, save the image to the cache.
3: On the main thread, show your image in the imageView.
Pseudo-code (I wrote it on the fly; it is not meant to run as-is and may have errors, sorry for that):
- (void)lazilyLoadImageFromURL:(NSURL *)url {
    imageView.image = [UIImage imageNamed:@"Placeholder.png"];
    if ([self cachedImageAvailableForURL:url]) {
        imageView.image = [self cachedImageForURL:url];
    }
    else {
        NSOperationQueue *queue = [NSOperationQueue mainQueue];
        NSURLRequest *urlRequest = [NSURLRequest requestWithURL:url];
        [NSURLConnection sendAsynchronousRequest:urlRequest queue:queue completionHandler:^(NSURLResponse *resp, NSData *data, NSError *error)
        {
            dispatch_async(dispatch_get_main_queue(), ^
            {
                if (error == nil && data)
                {
                    UIImage *urlImage = [[UIImage alloc] initWithData:data];
                    imageView.image = urlImage;
                    [self saveImageInCache:urlImage forURL:url];
                }
            });
        }];
    }
}

- (BOOL)cachedImageAvailableForURL:(NSURL *)url {
    // check if there is a saved cached image for this url
}

- (UIImage *)cachedImageForURL:(NSURL *)url {
    // returns the cached image for that url
}

- (void)saveImageInCache:(UIImage *)image forURL:(NSURL *)url {
    // saves the image in cache for the url
}
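For completeness, here is one minimal way those cache helpers could be backed by files in the Caches directory. It is only a sketch under the same "not meant to run" caveat as above, and the file-naming scheme is purely illustrative:
// Illustrative helper: derive a cache file path from the URL (naive naming scheme).
- (NSString *)cacheFilePathForURL:(NSURL *)url {
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) firstObject];
    NSString *fileName = [url.absoluteString stringByReplacingOccurrencesOfString:@"/" withString:@"_"];
    return [cachesDir stringByAppendingPathComponent:fileName];
}

- (BOOL)cachedImageAvailableForURL:(NSURL *)url {
    return [[NSFileManager defaultManager] fileExistsAtPath:[self cacheFilePathForURL:url]];
}

- (UIImage *)cachedImageForURL:(NSURL *)url {
    return [UIImage imageWithContentsOfFile:[self cacheFilePathForURL:url]];
}

- (void)saveImageInCache:(UIImage *)image forURL:(NSURL *)url {
    // PNG is used here for simplicity; JPEG would usually be smaller.
    [UIImagePNGRepresentation(image) writeToFile:[self cacheFilePathForURL:url] atomically:YES];
}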
Of course, this is only ONE possible way to do it. I tried to keep it simple, but there are plenty of better and more elaborate ways to do it.
Try this:
NSURL *imageURL = [NSURL URLWithString:@"www...."];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
dispatch_async(dispatch_get_main_queue(), ^{
// Update the UI
self.imgVWprofile.image=[UIImage imageWithData:imageData];
});
});

AVFoundation - why can't I get the video orientation right

I am using AVCaptureSession to capture video from a devices camera and then using AVAssetWriterInput and AVAssetTrack to compress/resize the video before uploading it to a server. The final videos will be viewed on the web via an html5 video element.
I'm running into multiple issues trying to get the orientation of the video correct. My app only supports landscape orientation and all captured videos should be in landscape orientation. However, I would like to allow the user to hold their device in either landscape direction (i.e. home button on either the left or the right hand side).
I am able to make the video preview show in the correct orientation with the following line of code
_previewLayer.connection.videoOrientation = UIDevice.currentDevice.orientation;
The problems start when processing the video via AVAssetWriterInput and friends. The result does not seem to account for the left vs. right landscape mode the video was captured in. IOW, sometimes the video comes out upside down. After some googling I found many people suggesting that the following line of code would solve this issue
writerInput.transform = videoTrack.preferredTransform;
...but this doesn't seem to work. After a bit of debugging I found that videoTrack.preferredTransform is always the same value, regardless of the orientation the video was captured in.
I tried manually tracking what orientation the video was captured in and setting the writerInput.transform to CGAffineTransformMakeRotation(M_PI) as needed. Which solved the problem!!!
...sorta
When I viewed the results on the device this solution worked as expected. Videos were right-side-up regardless of left vs. right orientation while recording. Unfortunately, when I viewed the exact same videos in another browser (Chrome on a MacBook) they were all upside-down!?
What am I doing wrong?
EDIT
Here's some code, in case it's helpful...
-(void)compressFile:(NSURL*)inUrl;
{
NSString* fileName = [@"compressed." stringByAppendingString:inUrl.lastPathComponent];
NSError* error;
NSURL* outUrl = [PlatformHelper getFilePath:fileName error:&error];
NSDictionary* compressionSettings = @{ AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31,
AVVideoAverageBitRateKey: [NSNumber numberWithInt:2500000],
AVVideoMaxKeyFrameIntervalKey: [NSNumber numberWithInt: 30] };
NSDictionary* videoSettings = @{ AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: [NSNumber numberWithInt:1280],
AVVideoHeightKey: [NSNumber numberWithInt:720],
AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill,
AVVideoCompressionPropertiesKey: compressionSettings };
NSDictionary* videoOptions = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] };
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
writerInput.expectsMediaDataInRealTime = YES;
AVAssetWriter* assetWriter = [AVAssetWriter assetWriterWithURL:outUrl fileType:AVFileTypeMPEG4 error:&error];
assetWriter.shouldOptimizeForNetworkUse = YES;
[assetWriter addInput:writerInput];
AVURLAsset* asset = [AVURLAsset URLAssetWithURL:inUrl options:nil];
AVAssetTrack* videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// !!! this line does not work as expected and causes all sorts of issues (videos display sideways in some cases) !!!
//writerInput.transform = videoTrack.preferredTransform;
AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoOptions];
AVAssetReader* assetReader = [AVAssetReader assetReaderWithAsset:asset error:&error];
[assetReader addOutput:readerOutput];
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];
[assetReader startReading];
[writerInput requestMediaDataWhenReadyOnQueue:_processingQueue usingBlock:
^{
/* snip */
}];
}
The problem is that modifying the writerInput.transform property only adds a tag to the video file's metadata which instructs the video player to rotate the file during playback. That's why the videos play in the correct orientation on your device (I'm guessing they also play correctly in QuickTime Player).
The pixel buffers captured by the camera are still laid out in the orientation in which they were captured. Many video players will not check for the preferred orientation metadata tag and will just play the file in the native pixel orientation.
If you want the user to be able to record video holding the phone in either landscape mode, you need to rectify this at the AVCaptureSession level before compression by performing a transform on the CVPixelBuffer of each video frame. This Apple Q&A covers it (look at the AVCaptureVideoOutput documentation as well):
https://developer.apple.com/library/ios/qa/qa1744/_index.html
Investigating the link above is the correct way to solve your problem. An alternate fast n' dirty way to solve the same problem would be to lock the recording UI of your app into only one landscape orientation and then to rotate all of your videos server-side using ffmpeg.
In case it's helpful for anyone, here's the code I ended up with. I had to do the work on the video as it was being captured rather than as a post-processing step. This is a helper class that manages the capture.
Interface
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
@interface VideoCaptureManager : NSObject<AVCaptureVideoDataOutputSampleBufferDelegate>
{
AVCaptureSession* _captureSession;
AVCaptureVideoPreviewLayer* _previewLayer;
AVCaptureVideoDataOutput* _videoOut;
AVCaptureDevice* _videoDevice;
AVCaptureDeviceInput* _videoIn;
dispatch_queue_t _videoProcessingQueue;
AVAssetWriter* _assetWriter;
AVAssetWriterInput* _writerInput;
BOOL _isCapturing;
NSString* _gameId;
NSString* _authToken;
}
-(void)setSettings:(NSString*)gameId authToken:(NSString*)authToken;
-(void)setOrientation:(AVCaptureVideoOrientation)orientation;
-(AVCaptureVideoPreviewLayer*)getPreviewLayer;
-(void)startPreview;
-(void)stopPreview;
-(void)startCapture;
-(void)stopCapture;
@end
Implementation (with a bit of editing and a few little TODOs)
@implementation VideoCaptureManager
-(id)init;
{
self = [super init];
if (self) {
NSError* error;
_videoProcessingQueue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL);
_captureSession = [AVCaptureSession new];
_videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
_previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
[_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
_videoOut = [AVCaptureVideoDataOutput new];
_videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] };
_videoOut.alwaysDiscardsLateVideoFrames = YES;
_videoIn = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];
// handle errors here
[_captureSession addInput:_videoIn];
[_captureSession addOutput:_videoOut];
}
return self;
}
-(void)setOrientation:(AVCaptureVideoOrientation)orientation;
{
_previewLayer.connection.videoOrientation = orientation;
for (AVCaptureConnection* item in _videoOut.connections) {
item.videoOrientation = orientation;
}
}
-(AVCaptureVideoPreviewLayer*)getPreviewLayer;
{
return _previewLayer;
}
-(void)startPreview;
{
[_captureSession startRunning];
}
-(void)stopPreview;
{
[_captureSession stopRunning];
}
-(void)startCapture;
{
if (_isCapturing) return;
NSURL* url = nil; // TODO: put code here to create your output URL
NSDictionary* compressionSettings = @{ AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31,
AVVideoAverageBitRateKey: [NSNumber numberWithInt:2500000],
AVVideoMaxKeyFrameIntervalKey: [NSNumber numberWithInt: 1],
};
NSDictionary* videoSettings = @{ AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: [NSNumber numberWithInt:1280],
AVVideoHeightKey: [NSNumber numberWithInt:720],
AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill,
AVVideoCompressionPropertiesKey: compressionSettings
};
_writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
_writerInput.expectsMediaDataInRealTime = YES;
NSError* error;
_assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:&error];
// handle errors
_assetWriter.shouldOptimizeForNetworkUse = YES;
[_assetWriter addInput:_writerInput];
[_videoOut setSampleBufferDelegate:self queue:_videoProcessingQueue];
_isCapturing = YES;
}
-(void)stopCapture;
{
if (!_isCapturing) return;
[_videoOut setSampleBufferDelegate:nil queue:nil]; // TODO: seems like there could be a race condition between this line and the next (could end up trying to write a buffer after calling writingFinished)
dispatch_async(_videoProcessingQueue, ^{
[_assetWriter finishWritingWithCompletionHandler:^{
[self writingFinished];
}];
});
}
-(void)writingFinished;
{
// TODO: need to check _assetWriter.status to make sure everything completed successfully
// do whatever post processing you need here
}
-(void)captureOutput:(AVCaptureOutput*)captureOutput didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection;
{
NSLog(#"Video frame was dropped.");
}
-(void)captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
if(_assetWriter.status != AVAssetWriterStatusWriting) {
CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
[_assetWriter startWriting]; // TODO: need to check the return value (a bool)
[_assetWriter startSessionAtSourceTime:lastSampleTime];
}
if (!_writerInput.readyForMoreMediaData || ![_writerInput appendSampleBuffer:sampleBuffer]) {
NSLog(#"Failed to write video buffer to output.");
}
}
@end
For compressing / resizing the video we can use AVAssetExportSession.
We can upload a video of up to 3.30 minutes in duration.
If the video duration is more than 3.30 minutes, it will show a memory warning.
As we are not applying any transform to the video here, the video stays as it was recorded.
Below is the sample code for compressing the video.
We can check the video size before and after compression.
-(void)trimVideoWithURL:(NSURL *)inputURL{
NSString *path1 = [inputURL path];
NSData *data = [[NSFileManager defaultManager] contentsAtPath:path1];
NSLog(@"size before compress video is %lu",(unsigned long)data.length);
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPreset640x480];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *outputURL = paths[0];
NSFileManager *manager = [NSFileManager defaultManager];
[manager createDirectoryAtPath:outputURL withIntermediateDirectories:YES attributes:nil error:nil];
outputURL = [outputURL stringByAppendingPathComponent:@"output.mp4"];
fullPath = [NSURL URLWithString:outputURL];
// Remove Existing File
[manager removeItemAtPath:outputURL error:nil];
exportSession.outputURL = [NSURL fileURLWithPath:outputURL];
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
CMTime start = CMTimeMakeWithSeconds(1.0, 600);
CMTime duration = CMTimeMakeWithSeconds(1.0, 600);
CMTimeRange range = CMTimeRangeMake(start, duration);
exportSession.timeRange = range;
[exportSession exportAsynchronouslyWithCompletionHandler:^(void)
{
switch (exportSession.status) {
case AVAssetExportSessionStatusCompleted:{
NSString *path = [fullPath path];
NSData *data = [[NSFileManager defaultManager] contentsAtPath:path];
NSLog(#"size after compress video is %lu",(unsigned long)data.length);
NSLog(#"Export Complete %d %#", exportSession.status, exportSession.error);
/*
Do your neccessay stuff here after compression
*/
}
break;
case AVAssetExportSessionStatusFailed:
NSLog(#"Failed:%#",exportSession.error);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Canceled:%#",exportSession.error);
break;
default:
break;
}
}];}

Loading an image from a URL but displaying it progressively

I have a screen that will load around 5 images, but they are huge images. Right now I use an
NSURLRequest
and a
connectionDidFinishLoading
callback to tell me when each image has loaded.
The problem is that the images pop up one by one, each only once it has fully downloaded. Is there a way to display an image while it is still loading?
Thanks
The guts of what you need to do this are available as CGImageSource functions.
First, you use an asynchronous NSURLConnection to get the data. You append the received data to an NSMutableData object as it arrives, so the data object keeps growing until the download finishes.
You also create a progressive image source:
CGImageSourceRef imageSourcRef = CGImageSourceCreateIncremental(dict);
You will find lots of examples here and on Google of how to set up the required dictionary.
Then as the data arrives, you pass the TOTAL data object into this method:
CGImageSourceUpdateData(imageSourcRef, (__bridge CFDataRef)data, NO); // NO means not finished
You can then ask the image source for an image, which will be partial as the image is downloading. With a CGImage you can create a UIImage.
When you get the final data, you update the image source one last time:
CGImageSourceUpdateData(imageSourcRef, (__bridge CFDataRef)data, YES);
You then use the image source to get a final image and you're done.
As for displaying it while loading: I don't think UIImageView can show a UIImage from incomplete data. I would go for
AsyncImageView,
which can take on all the burden of loading images asynchronously. A UIActivityIndicator is already added to it, so it is more user friendly.
Use blocks and GCD's dispatch_async method.
Look at this example:
//communityDetailViewController.h
@interface communityDetailViewController : UIViewController {
UIImageView *imgDisplay;
UIActivityIndicatorView *activity;
// the dispatch queue to load images
dispatch_queue_t queue;
}
@end
//communityDetailViewController.m
- (void)loadImage
{
[activity startAnimating];
NSString *url = @"URL of the image";
if (!queue) {
queue = dispatch_queue_create("image_queue", NULL);
}
dispatch_async(queue, ^{
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:url]];
UIImage *anImage = [UIImage imageWithData:data];
dispatch_async(dispatch_get_main_queue(), ^{
[activity stopAnimating];
activity.hidden = YES;
if (anImage != nil) {
[imgDisplay setImage:anImage];
}else{
[imgDisplay setImage:[UIImage imageNamed:@"no_image_available.png"]];
}
});
});
}
You can subclass UIImageView and use this.
-(void)connection:(NSURLConnection*)connection didReceiveResponse:(NSURLResponse*)response
{
imageData = [NSMutableData data];
imageSize = [response expectedContentLength];
imageSource = CGImageSourceCreateIncremental(NULL);
}
-(void)connection:(NSURLConnection*)connection didReceiveData:(NSData*)data
{
[imageData appendData:data];
CGImageSourceUpdateData(imageSource, (__bridge CFDataRef)imageData, ([imageData length] == imageSize) ? true : false);
CGImageRef cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
if (cgImage){
UIImage* img = [[UIImage alloc] initWithCGImage:cgImage scale:1.0f orientation:UIImageOrientationUp];
dispatch_async( dispatch_get_main_queue(), ^{
self.image = img;
});
CGImageRelease(cgImage);
}
}
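The delegate methods above assume a few ivars and something to start the connection; a minimal (and untested) way to wire that up in the subclass might be:
// Assumed ivars in the UIImageView subclass:
//   NSMutableData *imageData;
//   long long imageSize;
//   CGImageSourceRef imageSource;

- (void)startLoadingImageAtURL:(NSURL *)url
{
    NSURLRequest *request = [NSURLRequest requestWithURL:url];
    // NSURLConnection matches the delegate callbacks above (it is deprecated in favour of NSURLSession).
    [NSURLConnection connectionWithRequest:request delegate:self];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    // Release the incremental source once the final image has been delivered.
    if (imageSource) {
        CFRelease(imageSource);
        imageSource = NULL;
    }
    imageData = nil;
}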

Reduce lag with UITableView and GCD

I have a UITableView consisting of roughly 10 subclassed UITableViewCells named TBPostSnapCell. Each cell, when initialised, sets two of its variables with UIImages downloaded via GCD or retrieved from a cache stored in the user's documents directory.
For some reason, this is causing a noticeable lag on the tableView and therefore disrupting the UX of the app & table.
Please can you tell me how I can reduce this lag?
tableView... cellForRowAtIndexPath:
if (post.postType == TBPostTypeSnap || post.snaps != nil) {
TBPostSnapCell *snapCell = (TBPostSnapCell *) [tableView dequeueReusableCellWithIdentifier:snapID];
if (snapCell == nil) {
snapCell = [[[NSBundle mainBundle] loadNibNamed:@"TBPostSnapCell" owner:self options:nil] objectAtIndex:0];
[snapCell setPost:[posts objectAtIndex:indexPath.row]];
[snapCell.bottomImageView setImage:[UIImage imageNamed:[NSString stringWithFormat:@"%d", (indexPath.row % 6) + 1]]];
}
[snapCell.commentsButton setTag:indexPath.row];
[snapCell.commentsButton addTarget:self action:@selector(comments:) forControlEvents:UIControlEventTouchDown];
[snapCell setSelectionStyle:UITableViewCellSelectionStyleNone];
return snapCell;
}
TBSnapCell.m
- (void) setPost:(TBPost *) _post {
if (post != _post) {
[post release];
post = [_post retain];
}
...
if (self.snap == nil) {
NSString *str = [[_post snaps] objectForKey:TBImageOriginalURL];
NSURL *url = [NSURL URLWithString:str];
[TBImageDownloader downloadImageAtURL:url completion:^(UIImage *image) {
[self setSnap:image];
}];
}
if (self.authorAvatar == nil) {
...
NSURL *url = [[[_post user] avatars] objectForKey:[[TBForrstr sharedForrstr] stringForPhotoSize:TBPhotoSizeSmall]];
[TBImageDownloader downloadImageAtURL:url completion:^(UIImage *image) {
[self setAuthorAvatar:image];
}];
...
}
}
TBImageDownloader.m
+ (void) downloadImageAtURL:(NSURL *)url completion:(TBImageDownloadCompletion)_block {
if ([self hasWrittenDataToFilePath:filePathForURL(url)]) {
[self imageForURL:filePathForURL(url) callback:^(UIImage * image) {
_block(image); //gets UIImage from NSDocumentsDirectory via GCD
}];
return;
}
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_async(queue, ^{
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:url]];
dispatch_async(dispatch_get_main_queue(), ^{
[self writeImageData:UIImagePNGRepresentation(image) toFilePath:filePathForURL(url)];
_block(image);
});
});
}
The first thing to try is converting DISPATCH_QUEUE_PRIORITY_HIGH (a.k.a. "OMG MOST IMPORTANT WORK EVER, FORGET EVERYTHING ELSE") to something like DISPATCH_QUEUE_PRIORITY_LOW.
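That is a one-line change to the queue lookup in the downloader shown above:
// Lower the priority so image fetches don't starve UI-related work.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);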
If that doesn't fix it, you could attempt to do the HTTP traffic via dispatch sources, but that is a lot of work.
You might also just try to limit the number of in-flight HTTP fetches with a semaphore; the real trick will be deciding what the best limit is, as the "good" number will depend on the network, your CPUs, and memory pressure. Maybe benchmark 2, 4, and 8 with a few configurations and see if there is enough of a pattern to generalize.
OK, let's try just one; replace the queue = ... line with:
static dispatch_once_t once;
static dispatch_queue_t queue = NULL;
dispatch_once(&once, ^{
queue = dispatch_queue_create("com.blah.url-fetch", NULL);
});
Leave the rest of the code as is. This is likely to be the least sputtery, but may not load the images very fast.
For the more general case, rip out the change I just gave you, and we will work on this:
dispatch_async(queue, ^{
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:url]];
dispatch_async(dispatch_get_main_queue(), ^{
[self writeImageData:UIImagePNGRepresentation(image) toFilePath:filePathForURL(url)];
_block(image);
});
});
Replacing it with:
static dispatch_once_t once;
static const int max_in_flight = 2; // Also try 4, 8, and maybe some other numbers
static dispatch_semaphore_t limit = NULL;
dispatch_once(&once, ^{
limit = dispatch_semaphore_create(max_in_flight);
});
dispatch_async(queue, ^{
dispatch_semaphore_wait(limit, DISPATCH_TIME_FOREVER);
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:url]];
// (or you might want the dispatch_semaphore_signal here, and not below)
dispatch_async(dispatch_get_main_queue(), ^{
[self writeImageData:UIImagePNGRepresentation(image) toFilePath:filePathForURL(url)];
_block(image);
dispatch_semaphore_signal(limit);
});
});
NOTE: I haven't tested any of this code, even to see if it compiles. As written it will only allow 2 threads to be executing the bulk of the code in your two nested blocks. You might want to move the dispatch_semaphore_signal up to the commented line. That will limit you to two fetches/image creates, but they will be allowed to overlap with writing the image data to a file and calling your _block callback.
BTW, you do a lot of file I/O, which is faster on flash than any disk ever was, but if you are still looking for performance wins that might be another place to attack. For example, maybe keep the UIImages around in memory until you get a low-memory warning and only then write them to disk.
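A rough sketch of that last idea, keeping decoded images in an NSCache and only dropping (or persisting) them when a memory warning arrives; the class structure here is illustrative, not part of the original code:
// In-memory image cache; NSCache already evicts objects under memory pressure.
static NSCache *imageCache;

+ (void)setUpImageCache {
    imageCache = [[NSCache alloc] init];
    [[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationDidReceiveMemoryWarningNotification
                                                      object:nil
                                                       queue:[NSOperationQueue mainQueue]
                                                  usingBlock:^(NSNotification *note) {
        // Last chance to write anything worth keeping to disk, then drop the in-memory copies.
        [imageCache removeAllObjects];
    }];
}

+ (void)cacheImage:(UIImage *)image forURL:(NSURL *)url {
    if (image) [imageCache setObject:image forKey:url.absoluteString];
}

+ (UIImage *)cachedImageForURL:(NSURL *)url {
    return [imageCache objectForKey:url.absoluteString];
}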