I'm working on an application that uses the zxing library to read QR codes. I have a problem with ZXingWidgetController: when the view is shown while the application is in the background/not active (e.g. the screen is locked), the image from the camera is not shown on screen - only the background is visible, and the scanner does not seem to be working.
When I call the initCapture method again, the video from the camera appears after a short delay, but that means every time the application loses activity I need to reinitialize the scanner - which is not comfortable at all.
This behaviour can be reproduced in almost every application that uses ZXing, so I suppose it is a ZXing bug.
The ZXing initCapture method code is:
- (void)initCapture {
#if HAS_AVFF
AVCaptureDeviceInput *captureInput =
[AVCaptureDeviceInput deviceInputWithDevice:
[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
error:nil];
if(!captureInput)
{
NSLog(#"ERROR - CaptureInputNotInitialized");
}
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
if(!captureOutput)
{
NSLog(#"ERROR - CaptureOutputNotInitialized");
}
[captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
self.captureSession = [[[AVCaptureSession alloc] init] autorelease];
self.captureSession.sessionPreset = AVCaptureSessionPresetMedium; // 480x360 on a 4
if([self.captureSession canAddInput:captureInput])
{
[self.captureSession addInput:captureInput];
}
else
{
NSLog(#"ERROR - cannot add input");
}
if([self.captureSession canAddOutput:captureOutput])
{
[self.captureSession addOutput:captureOutput];
}
else
{
NSLog(#"ERROR - cannot add output");
}
[captureOutput release];
if (!self.prevLayer)
{
[self.prevLayer release];
}
self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
// NSLog(#"prev %p %#", self.prevLayer, self.prevLayer);
self.prevLayer.frame = self.view.bounds;
self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer: self.prevLayer];
[self.captureSession startRunning];
#endif
}
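For now, the only workaround I have found is to tear the session down and re-run initCapture when the app becomes active again - a rough sketch of what I do in the widget controller (stopCapture is the matching teardown method in my copy of the widget, and the delay is just what happened to work for me):

// e.g. in viewDidLoad of the scanner view controller
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(appDidBecomeActive:)
                                             name:UIApplicationDidBecomeActiveNotification
                                           object:nil];

- (void)appDidBecomeActive:(NSNotification *)notification {
    // rebuild the capture pipeline, otherwise the preview stays black after unlocking
    [self stopCapture];
    [self performSelector:@selector(initCapture) withObject:nil afterDelay:0.5];
}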
Maybe you guys know what is wrong?
I don't understand your question. If the application is in the background/not active, of course it can't work. You should make that clearer.
Related
I have an iOS app with a simple UIView placed in the view controller. I am trying to show the camera feed of the front-facing camera in that UIView. I am not trying to take a picture or record a video; I simply want to show the live feed in a UIView.
I have tried to implement AVCaptureVideoPreviewLayer, however the feed I get is blank. Nothing seems to happen. Here is my code:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input;
@try {
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
} @catch (NSException *exception) {
NSLog(@"Error; %@", error);
} @finally {
if (error == nil) {
if ([session canAddInput:input]) {
[session addInput:input];
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG}];
if ([session canAddOutput:stillImageOutput]) {
[session setSessionPreset:AVCaptureSessionPresetHigh];
[session addOutput:stillImageOutput];
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[previewLayer.connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
[backgroundStreamView.layer addSublayer:previewLayer];
[session startRunning];
NSLog(#"session running");
} else {
NSLog(#"cannot add output");
}
} else {
NSLog(#"cannot add inout");
}
} else {
NSLog(#"general error: %#", error);
}
}
The session runs perfectly fine, however no video feed is shown. What am I doing wrong?
Managed to fix it myself; it turned out to be a fairly simple issue - I didn't specify the frame of the AVCaptureVideoPreviewLayer, and as a result it was not appearing (presumably because it defaults to a zero-sized frame).
To fix this I set the frame to match the frame of my custom UIView:
[previewLayer setFrame:backgroundStreamView.bounds];
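Since the layer does not track the view on its own, it can also be worth re-applying the frame whenever the view is laid out again (rotation, layout changes). A small sketch, assuming the preview layer is kept in a property named previewLayer:

- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    // keep the preview layer matched to its host view after rotation / layout changes
    self.previewLayer.frame = backgroundStreamView.bounds;
}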
Deprecation code fix
AVCaptureStillImageOutput is also deprecated, so to fix that, I replaced it with the AVCapturePhotoOutput class. Thus the code changed from:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG}];
to the following:
AVCapturePhotoOutput *stillImageOutput = [[AVCapturePhotoOutput alloc] init];
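With AVCapturePhotoOutput the actual capture then goes through a delegate rather than a completion block. A minimal sketch of that flow (this is the standard AVCapturePhotoCaptureDelegate pattern, not code from my app; the iOS 11+ delegate method is shown):

// trigger a capture with default settings
[stillImageOutput capturePhotoWithSettings:[AVCapturePhotoSettings photoSettings]
                                  delegate:self];

// AVCapturePhotoCaptureDelegate callback (iOS 11+)
- (void)captureOutput:(AVCapturePhotoOutput *)output
didFinishProcessingPhoto:(AVCapturePhoto *)photo
                error:(NSError *)error {
    if (!error) {
        NSData *imageData = [photo fileDataRepresentation]; // compressed image data, ready to save
    }
}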
I am developing an iPhone app in which there is a requirement to pause and resume the camera, so I used AVFoundation for that instead of UIImagePickerController.
My code is:
- (void) startup :(BOOL)isFrontCamera
{
if (_session == nil)
{
NSLog(#"Starting up server");
self.isCapturing = NO;
self.isPaused = NO;
_currentFile = 0;
_discont = NO;
// create capture device with video input
_session = [[AVCaptureSession alloc] init];
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if(isFrontCamera)
{
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in videoDevices)
{
if (device.position == AVCaptureDevicePositionFront)
{
captureDevice = device;
break;
}
}
cameraDevice = captureDevice;
}
cameraDevice=[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:nil];
[_session addInput:input];
// audio input from default mic
AVCaptureDevice* mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput* micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
[_session addInput:micinput];
// create an output for YUV output with self as delegate
_captureQueue = dispatch_queue_create("uk.co.gdcl.cameraengine.capture", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput* videoout = [[AVCaptureVideoDataOutput alloc] init];
[videoout setSampleBufferDelegate:self queue:_captureQueue];
NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
nil];
videoout.videoSettings = setcapSettings;
[_session addOutput:videoout];
_videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
[_videoConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
NSDictionary* actual = videoout.videoSettings;
_cy = [[actual objectForKey:@"Width"] integerValue];
_cx = [[actual objectForKey:@"Height"] integerValue];
AVCaptureAudioDataOutput* audioout = [[AVCaptureAudioDataOutput alloc] init];
[audioout setSampleBufferDelegate:self queue:_captureQueue];
[_session addOutput:audioout];
_audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];
[_session startRunning];
_preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
}
}
Here I am facing a problem when I switch to the front camera: when I call the above method with the front camera selected, the preview layer gets stuck and no preview appears. My doubt is: can we change the capture device in the middle of a capture session? Please guide me on where I am going wrong, or suggest a solution for switching between the front and back camera while recording.
Thanks in Advance.
Yes, you can. There are just a few things you need to cater to.
You need to be using AVCaptureVideoDataOutput and its delegate for recording.
Make sure you remove the previous deviceInput before adding the new deviceInput.
You must remove and recreate the AVCaptureVideoDataOutput as well.
I am using these two functions for it right now and it works while the session is running.
- (void)configureVideoWithDevice:(AVCaptureDevice *)camera {
[_session beginConfiguration];
[_session removeInput:_videoInputDevice];
_videoInputDevice = nil;
_videoInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
if ([_session canAddInput:_videoInputDevice]) {
[_session addInput:_videoInputDevice];
}
[_session removeOutput:_videoDataOutput];
_videoDataOutput = nil;
_videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[_videoDataOutput setSampleBufferDelegate:self queue:_outputQueueVideo];
NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey, nil];
_videoDataOutput.videoSettings = setcapSettings;
[_session addOutput:_videoDataOutput];
_videoConnection = [_videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
if([_videoConnection isVideoOrientationSupported]) {
[_videoConnection setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
}
[_session commitConfiguration];
}
- (void)configureAudioWithDevice:(AVCaptureDevice *)microphone {
[_session beginConfiguration];
_audioInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:microphone error:nil];
if ([_session canAddInput:_audioInputDevice]) {
[_session addInput:_audioInputDevice];
}
[_session removeOutput:_audioDataOutput];
_audioDataOutput = nil;
_audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
[_audioDataOutput setSampleBufferDelegate:self queue:_outputQueueAudio];
[_session addOutput:_audioDataOutput];
_audioConnection = [_audioDataOutput connectionWithMediaType:AVMediaTypeAudio];
[_session commitConfiguration];
}
You can't change the captureDevice mid-session. And you can only have one capture session running at a time. You could end the current session and create a new one. There will be a slight lag (maybe a second or two depending on your CPU load).
I wish Apple would allow multiple sessions or at least multiple devices per session... but they do not... yet.
Have you considered having multiple sessions and then afterwards processing the video files to join them together into one?
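If you go that route, stitching the per-session files back together could look roughly like this (AVMutableComposition plus a passthrough export; clipURLs and joinedURL are placeholders for your own file list and destination):

AVMutableComposition *composition = [AVMutableComposition composition];
CMTime cursor = kCMTimeZero;
for (NSURL *clipURL in clipURLs) {
    AVURLAsset *clip = [AVURLAsset URLAssetWithURL:clipURL options:nil];
    NSError *error = nil;
    // append each clip end-to-end on the composition's timeline
    [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, clip.duration)
                         ofAsset:clip
                          atTime:cursor
                           error:&error];
    cursor = CMTimeAdd(cursor, clip.duration);
}
AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                presetName:AVAssetExportPresetPassthrough];
export.outputURL = joinedURL;
export.outputFileType = AVFileTypeQuickTimeMovie;
[export exportAsynchronouslyWithCompletionHandler:^{
    // check export.status / export.error here
}];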
I am using AVCaptureSession to capture video from a devices camera and then using AVAssetWriterInput and AVAssetTrack to compress/resize the video before uploading it to a server. The final videos will be viewed on the web via an html5 video element.
I'm running into multiple issues trying to get the orientation of the video correct. My app only supports landscape orientation and all captured videos should be in landscape orientation. However, I would like to allow the user to hold their device in either landscape direction (i.e. home button on either the left or the right hand side).
I am able to make the video preview show in the correct orientation with the following line of code
_previewLayer.connection.videoOrientation = UIDevice.currentDevice.orientation;
The problems start when processing the video via AVAssetWriterInput and friends. The result does not seem to account for the left vs. right landscape mode the video was captured in. In other words, sometimes the video comes out upside down. After some googling I found many people suggesting that the following line of code would solve this issue:
writerInput.transform = videoTrack.preferredTransform;
...but this doesn't seem to work. After a bit of debugging I found that videoTrack.preferredTransform is always the same value, regardless of the orientation the video was captured in.
I tried manually tracking what orientation the video was captured in and setting the writerInput.transform to CGAffineTransformMakeRotation(M_PI) as needed. Which solved the problem!!!
...sorta
When I viewed the results on the device, this solution worked as expected. Videos were right-side-up regardless of left vs. right orientation while recording. Unfortunately, when I viewed the exact same videos in another browser (Chrome on a MacBook) they were all upside-down!?!?!?
What am I doing wrong?
EDIT
Here's some code, in case it's helpful...
-(void)compressFile:(NSURL*)inUrl;
{
NSString* fileName = [@"compressed." stringByAppendingString:inUrl.lastPathComponent];
NSError* error;
NSURL* outUrl = [PlatformHelper getFilePath:fileName error:&error];
NSDictionary* compressionSettings = @{ AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31,
AVVideoAverageBitRateKey: [NSNumber numberWithInt:2500000],
AVVideoMaxKeyFrameIntervalKey: [NSNumber numberWithInt: 30] };
NSDictionary* videoSettings = @{ AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: [NSNumber numberWithInt:1280],
AVVideoHeightKey: [NSNumber numberWithInt:720],
AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill,
AVVideoCompressionPropertiesKey: compressionSettings };
NSDictionary* videoOptions = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] };
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
writerInput.expectsMediaDataInRealTime = YES;
AVAssetWriter* assetWriter = [AVAssetWriter assetWriterWithURL:outUrl fileType:AVFileTypeMPEG4 error:&error];
assetWriter.shouldOptimizeForNetworkUse = YES;
[assetWriter addInput:writerInput];
AVURLAsset* asset = [AVURLAsset URLAssetWithURL:inUrl options:nil];
AVAssetTrack* videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// !!! this line does not work as expected and causes all sorts of issues (videos display sideways in some cases) !!!
//writerInput.transform = videoTrack.preferredTransform;
AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoOptions];
AVAssetReader* assetReader = [AVAssetReader assetReaderWithAsset:asset error:&error];
[assetReader addOutput:readerOutput];
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];
[assetReader startReading];
[writerInput requestMediaDataWhenReadyOnQueue:_processingQueue usingBlock:
^{
/* snip */
}];
}
The problem is that modifying the writerInput.transform property only adds a tag to the video file's metadata which instructs the video player to rotate the file during playback. That's why the videos play in the correct orientation on your device (I'm guessing they would also play correctly in a QuickTime player).
The pixel buffers captured by the camera are still laid out in the orientation in which they were captured. Many video players will not check for the preferred orientation metadata tag and will just play the file in the native pixel orientation.
If you want the user to be able to record video holding the phone in either landscape mode, you need to rectify this at the AVCaptureSession level before compression by performing a transform on the CVPixelBuffer of each video frame. This Apple Q&A covers it (look at the AVCaptureVideoOutput documentation as well):
https://developer.apple.com/library/ios/qa/qa1744/_index.html
Investigating the link above is the correct way to solve your problem. An alternate fast n' dirty way to solve the same problem would be to lock the recording UI of your app into only one landscape orientation and then to rotate all of your videos server-side using ffmpeg.
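For reference, the capture-side version of that fix is simply to set the orientation on the video data output's connection while configuring the session, so the delivered sample buffers are already rotated (at some performance cost). A sketch, assuming videoOut is your AVCaptureVideoDataOutput:

AVCaptureConnection *connection = [videoOut connectionWithMediaType:AVMediaTypeVideo];
if ([connection isVideoOrientationSupported]) {
    // rotates the buffers handed to the delegate, not just the playback metadata
    connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
}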
In case it's helpful for anyone, here's the code I ended up with. I had to do the work on the video as it was being captured instead of as a post-processing step. This is a helper class that manages the capture.
Interface
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
@interface VideoCaptureManager : NSObject<AVCaptureVideoDataOutputSampleBufferDelegate>
{
AVCaptureSession* _captureSession;
AVCaptureVideoPreviewLayer* _previewLayer;
AVCaptureVideoDataOutput* _videoOut;
AVCaptureDevice* _videoDevice;
AVCaptureDeviceInput* _videoIn;
dispatch_queue_t _videoProcessingQueue;
AVAssetWriter* _assetWriter;
AVAssetWriterInput* _writerInput;
BOOL _isCapturing;
NSString* _gameId;
NSString* _authToken;
}
-(void)setSettings:(NSString*)gameId authToken:(NSString*)authToken;
-(void)setOrientation:(AVCaptureVideoOrientation)orientation;
-(AVCaptureVideoPreviewLayer*)getPreviewLayer;
-(void)startPreview;
-(void)stopPreview;
-(void)startCapture;
-(void)stopCapture;
@end
Implementation (w/ a bit of editing and a few little TODO's)
@implementation VideoCaptureManager
-(id)init;
{
self = [super init];
if (self) {
NSError* error;
_videoProcessingQueue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL);
_captureSession = [AVCaptureSession new];
_videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
_previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
[_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
_videoOut = [AVCaptureVideoDataOutput new];
_videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] };
_videoOut.alwaysDiscardsLateVideoFrames = YES;
_videoIn = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];
// handle errors here
[_captureSession addInput:_videoIn];
[_captureSession addOutput:_videoOut];
}
return self;
}
-(void)setOrientation:(AVCaptureVideoOrientation)orientation;
{
_previewLayer.connection.videoOrientation = orientation;
for (AVCaptureConnection* item in _videoOut.connections) {
item.videoOrientation = orientation;
}
}
-(AVCaptureVideoPreviewLayer*)getPreviewLayer;
{
return _previewLayer;
}
-(void)startPreview;
{
[_captureSession startRunning];
}
-(void)stopPreview;
{
[_captureSession stopRunning];
}
-(void)startCapture;
{
if (_isCapturing) return;
NSURL* url = put code here to create your output url
NSDictionary* compressionSettings = @{ AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31,
AVVideoAverageBitRateKey: [NSNumber numberWithInt:2500000],
AVVideoMaxKeyFrameIntervalKey: [NSNumber numberWithInt: 1],
};
NSDictionary* videoSettings = @{ AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: [NSNumber numberWithInt:1280],
AVVideoHeightKey: [NSNumber numberWithInt:720],
AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill,
AVVideoCompressionPropertiesKey: compressionSettings
};
_writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
_writerInput.expectsMediaDataInRealTime = YES;
NSError* error;
_assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:&error];
// handle errors
_assetWriter.shouldOptimizeForNetworkUse = YES;
[_assetWriter addInput:_writerInput];
[_videoOut setSampleBufferDelegate:self queue:_videoProcessingQueue];
_isCapturing = YES;
}
-(void)stopCapture;
{
if (!_isCapturing) return;
[_videoOut setSampleBufferDelegate:nil queue:nil]; // TODO: seems like there could be a race condition between this line and the next (could end up trying to write a buffer after calling writingFinished)
dispatch_async(_videoProcessingQueue, ^{
[_assetWriter finishWritingWithCompletionHandler:^{
[self writingFinished];
}];
});
}
-(void)writingFinished;
{
// TODO: need to check _assetWriter.status to make sure everything completed successfully
// do whatever post processing you need here
}
-(void)captureOutput:(AVCaptureOutput*)captureOutput didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection;
{
NSLog(#"Video frame was dropped.");
}
-(void)captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
if(_assetWriter.status != AVAssetWriterStatusWriting) {
CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
[_assetWriter startWriting]; // TODO: need to check the return value (a bool)
[_assetWriter startSessionAtSourceTime:lastSampleTime];
}
if (!_writerInput.readyForMoreMediaData || ![_writerInput appendSampleBuffer:sampleBuffer]) {
NSLog(#"Failed to write video buffer to output.");
}
}
@end
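For completeness, this is roughly how I drive the class from a view controller (previewContainerView is just a placeholder for whatever view hosts the preview):

VideoCaptureManager *captureManager = [[VideoCaptureManager alloc] init];
[captureManager setOrientation:AVCaptureVideoOrientationLandscapeRight];

AVCaptureVideoPreviewLayer *previewLayer = [captureManager getPreviewLayer];
previewLayer.frame = self.previewContainerView.bounds;
[self.previewContainerView.layer addSublayer:previewLayer];

[captureManager startPreview];
// later, when the user taps record / stop:
[captureManager startCapture];
// ...
[captureManager stopCapture];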
For compressing/resizing the video, we can use AVAssetExportSession.
We can upload a video of up to about 3:30 minutes in duration.
If the video is longer than 3:30 minutes, it will produce a memory warning.
As we are not applying any transform to the video here, the video stays exactly as it was recorded.
Below is the sample code for compressing the video.
We can check the video size before and after compression.
-(void)trimVideoWithURL:(NSURL *)inputURL{
NSString *path1 = [inputURL path];
NSData *data = [[NSFileManager defaultManager] contentsAtPath:path1];
NSLog(#"size before compress video is %lu",(unsigned long)data.length);
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPreset640x480];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *outputURL = paths[0];
NSFileManager *manager = [NSFileManager defaultManager];
[manager createDirectoryAtPath:outputURL withIntermediateDirectories:YES attributes:nil error:nil];
outputURL = [outputURL stringByAppendingPathComponent:@"output.mp4"];
fullPath = [NSURL URLWithString:outputURL];
// Remove Existing File
[manager removeItemAtPath:outputURL error:nil];
exportSession.outputURL = [NSURL fileURLWithPath:outputURL];
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
CMTime start = CMTimeMakeWithSeconds(1.0, 600);
CMTime duration = CMTimeMakeWithSeconds(1.0, 600);
CMTimeRange range = CMTimeRangeMake(start, duration);
exportSession.timeRange = range;
[exportSession exportAsynchronouslyWithCompletionHandler:^(void)
{
switch (exportSession.status) {
case AVAssetExportSessionStatusCompleted:{
NSString *path = [fullPath path];
NSData *data = [[NSFileManager defaultManager] contentsAtPath:path];
NSLog(#"size after compress video is %lu",(unsigned long)data.length);
NSLog(#"Export Complete %d %#", exportSession.status, exportSession.error);
/*
Do your necessary stuff here after compression
*/
}
break;
case AVAssetExportSessionStatusFailed:
NSLog(#"Failed:%#",exportSession.error);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Canceled:%#",exportSession.error);
break;
default:
break;
}
}];
}
I spent two whole weeks just trying to resolve this problem. So frustrating!
The following two functions are what I'm using to fetch an image from the device library.
If I call the setImage function multiple times, I keep losing free memory on my iOS device.
I think the "[imageFromAsset initWithCGImage:[[myasset defaultRepresentation] fullScreenImage]];" call in the assetImage function causes the problem.
Can anyone help me? Any clues or thoughts would be SUPER appreciated! Please!
- (void)setImage:(NSURL *)imageURL{
UIImage *imageFromDeviceLibrary = [[UIImage alloc] init];
[[DevicePhotoControl sharedInstance] assetImage:[imageURL absoluteString] imageToStore:imageFromDeviceLibrary];
UIImageView *fullImageView = [[UIImageView alloc] initWithImage:imageFromDeviceLibrary];
[imageFromDeviceLibrary release];
[self.view addSubview:fullImageView];
[fullImageView release];
}
- (void)assetImage:(NSString *)assetURL imageToStore:(UIImage *)imageFromAsset{
// Handling exception case for when it doesn't have assets-library
if ([assetURL hasPrefix:@"assets-library:"] == NO) {
assetURL = [NSString stringWithFormat:@"%@%@", @"assets-library:", assetURL];
}
__block BOOL busy = YES;
ALAssetsLibrary* assetslibrary = [[[ALAssetsLibrary alloc] init] autorelease];
//get image data by URL
ALAssetsLibraryAssetForURLResultBlock resultblock = ^(ALAsset *myasset)
{
[imageFromAsset initWithCGImage:[[myasset defaultRepresentation] fullScreenImage]];
busy = NO;
};
ALAssetsLibraryAccessFailureBlock failureblock = ^(NSError *myerror)
{
NSLog(#"Library Image Fetching Failed : %#",[myerror localizedDescription]);
busy = NO;
};
[assetslibrary assetForURL:[NSURL URLWithString:assetURL]
resultBlock:resultblock
failureBlock:failureblock];
while (busy == YES) {
[[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]];
}
}
The moment the ALAssetsLibrary is released, all of the asset objects obtained from it are gone with it.
I suggest that you create your assets library in the app delegate to keep it alive, and only reset it when you receive the ALAssetsLibraryChangedNotification change notification from the ALAssetsLibrary.
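A minimal sketch of that setup, assuming a retained assetsLibrary property on the app delegate (non-ARC, to match the code in the question):

// AppDelegate.m
- (ALAssetsLibrary *)assetsLibrary {
    if (!_assetsLibrary) {
        _assetsLibrary = [[ALAssetsLibrary alloc] init];
    }
    return _assetsLibrary;
}

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // reset the shared library only when its contents actually change
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(assetsLibraryChanged:)
                                                 name:ALAssetsLibraryChangedNotification
                                               object:nil];
    return YES;
}

- (void)assetsLibraryChanged:(NSNotification *)notification {
    [_assetsLibrary release];
    _assetsLibrary = nil; // the getter will lazily recreate it
}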
See here - it may help you.
I am new to iOS development. At the moment I'm building an iPad app that gets data back from a web service, saves it in a Core Data database, and shows it in a table view using NSFetchedResultsController.
What is the problem? In iOS 6 it all works like a charm; in iOS 5.0 it saves the data but does not show it in the table view, and it gives the following error.
CoreData: error: Serious application error. An exception was caught from the delegate of NSFetchedResultsController during a call to -controllerDidChangeContent:. The NIB data is invalid. with userInfo (null)
And when I test on iOS 5.1 it does not do anything: it does not show data in the table view, and the data is not stored in the Core Data database.
Here is what I do. In my viewDidAppear I check whether there is a document I can use.
if (!self.genkDatabase) { // we'll create a default database if none is set
NSURL *url = [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
url = [url URLByAppendingPathComponent:@"Default appGenk Database"];
self.genkDatabase = [[UIManagedDocument alloc] initWithFileURL:url]; // setter will create this for us on disk
}
Then it goes to my setter, and next useDocument checks which state my document is in.
- (void)setGenkDatabase:(UIManagedDocument *)genkDatabase
{
if (_genkDatabase != genkDatabase) {
_genkDatabase = genkDatabase;
[self useDocument];
}
NSLog(#"Comes in the setdatabase methode.");
}
- (void)useDocument
{
NSLog(#"Comses in the use document");
if (![[NSFileManager defaultManager] fileExistsAtPath:[self.genkDatabase.fileURL path]]) {
// does not exist on disk, so create it
[self.genkDatabase saveToURL:self.genkDatabase.fileURL forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success) {
NSLog(#"create");
[self setupFetchedResultsController];
NSLog(#"create 2");
[self fetchNewsIntoDocument:self.genkDatabase];
NSLog(#"create 3");
}];
} else if (self.genkDatabase.documentState == UIDocumentStateClosed) {
NSLog(#"closed news");
// exists on disk, but we need to open it
[self.genkDatabase openWithCompletionHandler:^(BOOL success) {
[self setupFetchedResultsController];
}];
} else if (self.genkDatabase.documentState == UIDocumentStateNormal) {
NSLog(#"normal");
// already open and ready to use
[self setupFetchedResultsController];
}
}
Finally I fetch my data into my document using the following function.
- (void)fetchNewsIntoDocument:(UIManagedDocument *)document
{
dispatch_queue_t fetchNews = dispatch_queue_create("news fetcher", NULL);
dispatch_async(fetchNews, ^{
NSArray *news = [GenkData getNews];
[document.managedObjectContext performBlock:^{
int newsId = 0;
for (NSDictionary *genkInfo in news) {
newsId++;
[News newsWithGenkInfo:genkInfo inManagedObjectContext:document.managedObjectContext withNewsId:newsId];
NSLog(#"news inserted");
}
[document saveToURL:document.fileURL forSaveOperation:UIDocumentSaveForOverwriting completionHandler:NULL];
}];
});
dispatch_release(fetchNews);
}
For the problem in iOS 5.0, this is my NSFetchedResultsController setup method.
- (void)setupFetchedResultsController // attaches an NSFetchRequest to this UITableViewController
{
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"News"];
request.sortDescriptors = [NSArray arrayWithObjects:[NSSortDescriptor sortDescriptorWithKey:@"date" ascending:YES],nil];
self.fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request
managedObjectContext:self.genkDatabase.managedObjectContext
sectionNameKeyPath:nil
cacheName:nil];
}
I am doing this like the example from Paul Hegarty at Stanford University. You can find the Photomania example over here.
I hope someone can help me!
Kind regards