I think this is a really bad way to check whether the user has granted access to the photo library.
Is there a better way to check for access?
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
// create the library
ALAssetsLibraryAssetForURLResultBlock resultblock = ^(ALAsset *myasset)
{
    accessGiven = YES;
    return;
};
void (^assetGroupEnumeratorFailure)(NSError *) = ^(NSError *error) {
    accessGiven = NO;
    return;
};
// create the two blocks
NSURL *url = [[NSURL alloc] initWithString:@" "];
[library assetForURL:url resultBlock:resultblock failureBlock:assetGroupEnumeratorFailure];
// use an empty URL to check access
[library release];
iOS 6.0 has a better way to do this:
[ALAssetsLibrary authorizationStatus];
See the documentation for the authorizationStatus class method and the ALAuthorizationStatus enum.
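For reference, a minimal sketch of branching on that status (the enum cases are the ones declared alongside ALAssetsLibrary in iOS 6+):
ALAuthorizationStatus status = [ALAssetsLibrary authorizationStatus];
switch (status) {
    case ALAuthorizationStatusAuthorized:
        // the user has granted access
        break;
    case ALAuthorizationStatusNotDetermined:
        // the user has not been asked yet; the first library access will prompt them
        break;
    case ALAuthorizationStatusDenied:
    case ALAuthorizationStatusRestricted:
        // access was denied, or is restricted (e.g. by parental controls)
        break;
}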
I am trying to write text-to-speech output to a file in Objective-C.
Since iOS 13, AVSpeechSynthesizer can write its output to a file, but I'm stuck on writeUtterance:toBufferCallback:.
Does anyone have an example of this method in Objective-C?
[synth speakUtterance:utterance];
Referring to the potential answer in Swift, this would be the Objective-C implementation:
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:@"test 123"];
AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
[utterance setVoice:voice];
__block AVAudioFile *output = nil;
[synthesizer writeUtterance:utterance
           toBufferCallback:^(AVAudioBuffer * _Nonnull buffer) {
    // a plain cast never yields nil, so check the class instead
    if (![buffer isKindOfClass:[AVAudioPCMBuffer class]]) {
        NSLog(@"Error: expected an AVAudioPCMBuffer");
        return;
    }
    AVAudioPCMBuffer *pcmBuffer = (AVAudioPCMBuffer *)buffer;
    // a buffer with frameLength == 0 signals the end of synthesis
    if (pcmBuffer.frameLength != 0) {
        // append the buffer to the file, creating it on the first callback;
        // use a writable sandbox location (a bare relative path like "test.caf" is not writable)
        if (output == nil) {
            NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"test.caf"];
            output = [[AVAudioFile alloc] initForWriting:[NSURL fileURLWithPath:path]
                                                settings:pcmBuffer.format.settings
                                            commonFormat:AVAudioPCMFormatInt16
                                             interleaved:NO
                                                   error:nil];
        }
        [output writeFromBuffer:pcmBuffer error:nil];
    }
}];
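Once a zero-length buffer arrives, synthesis is finished and the written file can be played back. A minimal sketch, assuming output was created as above:
NSError *playError = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:output.url error:&playError];
[player play];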
Inside a UIWebView, what is a good way to print a PDF document?
The PDF is accessible via a URL, or it can be loaded inside an iframe.
The standard JavaScript window.print() function will not work.
I am considering using a JavaScript bridge such as:
- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType {
    NSURL *URL = [request URL];
    if ([[URL scheme] isEqualToString:@"native"]) {
        NSString *urlString = [[request URL] absoluteString];
        NSArray *urlParts = [urlString componentsSeparatedByString:@":"];
        NSString *cmd = [urlParts objectAtIndex:1];
        if ([cmd isEqualToString:@"printPdf"]) {
            // [self dosomething];
        }
    }
    return YES;
}
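Hypothetically, the page would trigger this bridge by navigating to a URL such as native:printPdf:somedoc.pdf (for example via window.location), and the printPdf branch could forward the path segment to the print method shown further down. Both the scheme and the URL layout here are this app's own convention, not an API:
if ([cmd isEqualToString:@"printPdf"]) {
    // urlParts[2] is assumed to carry the PDF path after the command
    [self printInit:[urlParts objectAtIndex:2]];
    return NO; // don't let the web view try to load the native: URL itself
}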
At this point I need a native method that accepts a path to the PDF and sends it to AirPrint.
Is this a good approach? I am searching for examples of how to print a PDF inside a UIWebView.
As I've earned the tumbleweed badge for no upvotes and no responses, I'll post my solution.
This fetches the PDF and opens the AirPrint dialog, all within a UIWebView.
If iOS would simply allow the JavaScript window.print() to function inside a UIWebView, my app would not be sitting in the App Store waiting for approval and re-release.
Anyway, here's a working solution:
- (void)printInit:(NSString *)parm {
    UIPrintInteractionController *controller = [UIPrintInteractionController sharedPrintController];
    if (!controller) {
        NSLog(@"Couldn't get shared UIPrintInteractionController!");
        return;
    }
    NSString *base = @"https://someurl.com/";
    NSString *ustr = [base stringByAppendingString:parm];
    //NSURL *url = [NSURL fileURLWithPath:ustr];
    NSURL *url = [NSURL URLWithString:ustr];
    // NB: this is a synchronous fetch and will block the calling thread
    NSData *thePdf = [NSData dataWithContentsOfURL:url];
    controller.printingItem = thePdf;
    UIPrintInfo *printInfo = [UIPrintInfo printInfo];
    printInfo.outputType = UIPrintInfoOutputGeneral;
    printInfo.jobName = @"PDFDoc";
    controller.printInfo = printInfo;
    void (^completionHandler)(UIPrintInteractionController *, BOOL, NSError *) =
        ^(UIPrintInteractionController *printController, BOOL completed, NSError *error) {
            if (!completed && error) {
                NSLog(@"FAILED! error = %@", [error localizedDescription]);
            }
        };
    CGRect rect = CGRectMake(310, 5, 100, 5);
    [controller presentFromRect:rect inView:self.webView animated:YES completionHandler:completionHandler];
}
I am developing an iPhone app that needs to pause and resume the camera, so I used AVFoundation instead of UIImagePickerController.
My code is:
- (void) startup :(BOOL)isFrontCamera
{
if (_session == nil)
{
NSLog(#"Starting up server");
self.isCapturing = NO;
self.isPaused = NO;
_currentFile = 0;
_discont = NO;
// create capture device with video input
_session = [[AVCaptureSession alloc] init];
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if(isFrontCamera)
{
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in videoDevices)
{
if (device.position == AVCaptureDevicePositionFront)
{
captureDevice = device;
break;
}
}
cameraDevice = captureDevice;
}
// NB: this line unconditionally resets cameraDevice to the default (back) camera, discarding the front-camera selection above
cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:nil];
[_session addInput:input];
// audio input from default mic
AVCaptureDevice* mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput* micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
[_session addInput:micinput];
// create an output for YUV output with self as delegate
_captureQueue = dispatch_queue_create("uk.co.gdcl.cameraengine.capture", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput* videoout = [[AVCaptureVideoDataOutput alloc] init];
[videoout setSampleBufferDelegate:self queue:_captureQueue];
NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
nil];
videoout.videoSettings = setcapSettings;
[_session addOutput:videoout];
_videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
[_videoConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
NSDictionary* actual = videoout.videoSettings;
_cy = [[actual objectForKey:@"Width"] integerValue];
_cx = [[actual objectForKey:@"Height"] integerValue];
AVCaptureAudioDataOutput* audioout = [[AVCaptureAudioDataOutput alloc] init];
[audioout setSampleBufferDelegate:self queue:_captureQueue];
[_session addOutput:audioout];
_audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];
[_session startRunning];
_preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
}
}
Here I am facing a problem when I switch to the front camera: when I call the above method after changing to the front camera, the preview layer gets stuck and no preview appears. My doubt is: can we change the capture device in the middle of a capture session? Please guide me on where I am going wrong, or suggest how to switch between the front and back camera while recording.
Thanks in advance.
Yes, you can. There are just a few things you need to cater to:
1. You need to be using AVCaptureVideoDataOutput and its delegate for recording.
2. Make sure you remove the previous deviceInput before adding the new deviceInput.
3. You must remove and recreate the AVCaptureVideoDataOutput as well.
I am using these two functions for it right now, and it works while the session is running.
- (void)configureVideoWithDevice:(AVCaptureDevice *)camera {
[_session beginConfiguration];
[_session removeInput:_videoInputDevice];
_videoInputDevice = nil;
_videoInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
if ([_session canAddInput:_videoInputDevice]) {
[_session addInput:_videoInputDevice];
}
[_session removeOutput:_videoDataOutput];
_videoDataOutput = nil;
_videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[_videoDataOutput setSampleBufferDelegate:self queue:_outputQueueVideo];
NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey, nil];
_videoDataOutput.videoSettings = setcapSettings;
[_session addOutput:_videoDataOutput];
_videoConnection = [_videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
if([_videoConnection isVideoOrientationSupported]) {
[_videoConnection setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
}
[_session commitConfiguration];
}
- (void)configureAudioWithDevice:(AVCaptureDevice *)microphone {
[_session beginConfiguration];
_audioInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:microphone error:nil];
if ([_session canAddInput:_audioInputDevice]) {
[_session addInput:_audioInputDevice];
}
[_session removeOutput:_audioDataOutput];
_audioDataOutput = nil;
_audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
[_audioDataOutput setSampleBufferDelegate:self queue:_outputQueueAudio];
[_session addOutput:_audioDataOutput];
_audioConnection = [_audioDataOutput connectionWithMediaType:AVMediaTypeAudio];
[_session commitConfiguration];
}
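A sketch of how these might be called when the user toggles cameras, assuming the configureVideoWithDevice: above; the lookup loop is the same pattern as in the question:
- (void)switchToFrontCamera:(BOOL)useFront {
    AVCaptureDevicePosition position = useFront ? AVCaptureDevicePositionFront
                                                : AVCaptureDevicePositionBack;
    AVCaptureDevice *camera = nil;
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == position) { camera = device; break; }
    }
    if (camera) {
        [self configureVideoWithDevice:camera]; // safe while the session is running
    }
}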
You can't change the captureDevice mid-session, and you can only have one capture session running at a time. You could end the current session and create a new one; there will be a slight lag (maybe a second or two, depending on your CPU load).
I wish Apple would allow multiple sessions, or at least multiple devices per session... but they do not... yet.
Have you considered having multiple sessions and then afterwards processing the video files to join them together into one?
For a list of URLs I need to load the photos with ALAssetsLibrary's assetForURL:, all within one method.
This method works asynchronously, so, as we all know, it cannot simply be iterated over the passed list of URLs.
I found this snippet (which should work):
- (void)loadImages:(NSArray *)imageUrls loadedImages:(NSArray *)loadedImages callback: (void(^)(NSArray *))callback
{
if (imageUrls == nil || [imageUrls count] == 0) {
callback(loadedImages);
}
else {
NSURL *head = [imageUrls head];
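        // head and tail are presumably NSArray category helpers from the linked article: the first element, and the rest of the array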
__unsafe_unretained id unretained_self = self;
ALAssetsLibrary* library = [[ALAssetsLibrary alloc] init];
[library assetForURL:head resultBlock:^(ALAsset *asset) {
ALAssetRepresentation *assetRepresentation = asset.defaultRepresentation;
UIImage *image = [UIImage imageWithCGImage:assetRepresentation.fullResolutionImage scale:assetRepresentation.scale orientation:(UIImageOrientation)assetRepresentation.orientation];
[unretained_self loadImages:[imageUrls tail] loadedImages:[loadedImages arrayByAddingObject:image] callback:callback];
} failureBlock:^(NSError *error) {
[unretained_self loadImages:[imageUrls tail] loadedImages:loadedImages callback:callback];
}];
}
}
How do I write the method declaration in the form void loadImages(NSArray *imageUrls, NSArray *loadedImages, ...) (above all, the callback part)?
How do I call this method from another method (again, mainly the callback part)?
Can the callback live in the calling method, or is a third method needed for this? And how would that method need to be written?
I have found the snippet here: http://www.calebmadrigal.com/functional-programming-deal-asynchronicity-objective-c/
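For what it's worth, a minimal sketch of the declaration and of an invocation with an inline callback block; the empty array seeds the accumulator, and urls is assumed to hold your asset URLs:
- (void)loadImages:(NSArray *)imageUrls loadedImages:(NSArray *)loadedImages callback:(void (^)(NSArray *))callback;

[self loadImages:urls loadedImages:[NSArray array] callback:^(NSArray *images) {
    // runs once every URL has been attempted; images are in the original order
    NSLog(@"loaded %lu images", (unsigned long)[images count]);
}];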
Use NSThread to call the loadImages method. Note that detachNewThreadSelector:toTarget:withObject: returns void, so its result cannot be assigned; the loaded images are collected inside loadImages: itself.
[NSThread detachNewThreadSelector:@selector(loadImages:)
                         toTarget:self
                       withObject:imageUrlsCollection];
- (NSMutableArray *)loadImages:(NSArray *)imageUrls
{
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    NSMutableArray *loadedImages = [[NSMutableArray alloc] init];
    @try
    {
        for (int index = 0; index < [imageUrls count]; index++)
        {
            NSURL *url = [imageUrls objectAtIndex:index];
            [library assetForURL:url resultBlock:^(ALAsset *asset) {
                ALAssetRepresentation *assetRepresentation = asset.defaultRepresentation;
                // note: these result blocks run asynchronously, so loadedImages
                // may still be filling up when this method returns
                dispatch_async(dispatch_get_main_queue(), ^{
                    UIImage *image = [UIImage imageWithCGImage:assetRepresentation.fullResolutionImage scale:assetRepresentation.scale orientation:(UIImageOrientation)assetRepresentation.orientation];
                    [loadedImages addObject:image];
                });
            } failureBlock:^(NSError *error) {
                NSLog(@"Failed to get image");
            }];
        }
    }
    @catch (NSException *exception)
    {
        NSLog(@"%s\n exception: Name- %@ Reason->%@", __PRETTY_FUNCTION__, [exception name], [exception reason]);
    }
    @finally
    {
        return loadedImages;
    }
}
Note: with ARC, take care about the "invalid attempt to access ALAssetPrivate past the lifetime of its owning ALAssetsLibrary" issue.
Here is the fix :)
I am trying to download more than 600 images in a loop, with a progress meter at the top of the screen. I block the screen with a faded layer to show activity and progress.
I am getting memory warnings partway through, and the app crashes.
My steps to reach the loop are:
In the app delegate, I first check the Core Data table for all rows that have a "0" value in the isImageAvailable bool field.
If it shows me some count (say 600), I show an alert with YES and NO options.
On YES: [self performSelector:@selector(myDownload:) withObject:nil afterDelay:0.2];
In myDownload:
NSOperationQueue *queue = [NSOperationQueue new];
// Create our NSInvocationOperation to call startUpdatingRecords:, passing in nil
NSInvocationOperation *operation = [[NSInvocationOperation alloc] initWithTarget:self
                                                                        selector:@selector(startUpdatingRecords:) object:nil];
// Add the operation to the queue
[queue addOperation:operation];
[operation release];
[queue release];
In startUpdatingRecords:
-(void)startUpdatingRecords:(id)sender
{
    [self performSelectorInBackground:@selector(updateProgressMeter:) withObject:[NSString stringWithFormat:@"%d", self.loopStartIndex]];
    // Variable declarations
    CGSize newSizeLarge;
    NSPredicate *predicate;
    NSMutableArray *MatchingID;
    Image_DB *data;
    // Cache directory path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
    NSData *responseData; // = [[NSData alloc] init];
    NSURL *url = [[[NSURL alloc] init] autorelease];
    NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
    UIImage *imgSelected_Large = [[[UIImage alloc] init] autorelease];
    // Loop through all IDs
    for (int i = 0; i < [self.arrayOfID count]; i++) //for (int i = loopStart; i < loopEnd; i++)
    {
        if (self.abortDownload)
        {
            break;
        }
        NSString *documentsDirectory = [[[NSString alloc] initWithFormat:@"%@", [paths objectAtIndex:0]] autorelease];
        documentsDirectory = [paths objectAtIndex:0];
        documentsDirectory = [documentsDirectory stringByAppendingFormat:@"/ImageFolder"]; // Image folder path
        myClass *classObj = [self.arrayOfID objectAtIndex:i];
        NSString *strURl = [[[NSString alloc] initWithFormat:@"%@%@", self.MyURL, classObj.recipeImageStr] autorelease];
        //NSLog(@"URL = %@", strURl);
        url = [NSURL URLWithString:strURl];
        request = [NSMutableURLRequest requestWithURL:url];
        responseData = [NSURLConnection sendSynchronousRequest:request returningResponse:NULL error:NULL]; // Get image data into NSData
        //imgSelected_Large = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:strURl]]];
        NSLog(@"Download Count = %d", i + 1);
        if (responseData != nil)
        {
            imgSelected_Large = [UIImage imageWithData:responseData];
            // Resizing image
            newSizeLarge.width = 320;
            newSizeLarge.height = 180;
            imgSelected_Large = [self imageWithImage:imgSelected_Large scaledToSize:newSizeLarge]; // New sized image
            NSData *dataPhoto; // no need to release it because UIImageJPEGRepresentation gives an autoreleased NSData object
            dataPhoto = UIImageJPEGRepresentation(imgSelected_Large, 0.6); // Set new image representation and its compression quality
            documentsDirectory = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"Image_%d", classObj.nodeID]];
            [dataPhoto writeToFile:documentsDirectory atomically:YES]; // Write file to local folder at default path
            predicate = [NSPredicate predicateWithFormat:@"(image_ID = %d)", classObj.nodeID];
            MatchingID = [CoreDataAPIMethods searchObjectsInContext:@"Image_DB" :predicate :@"image_ID" :YES :self.managedObjectContext];
            // Setting flag variable for available image
            for (int j = 0; j < [MatchingID count]; j++)
            {
                // Assign the records to the class object and save to the database
                data = (Image_DB *)[MatchingID objectAtIndex:j];
                // data.image_large = dataPhoto; // Code for storing BLOB object to DB
                data.extra_1 = @"1";
                //NSLog(@"Flag updated");
            }
        }
        // Exit-out code
        if (i == [self.arrayOfID count] - 1 || i == [self.arrayOfID count]) // It's the last record to be stored
        {
            NSError *error;
            if (![self.managedObjectContext save:&error])
            {
                // Handle the error...
                NSLog(@"Error in updating %@", error);
            }
            self.isUpdateImageCalled = NO;
            [self performSelectorOnMainThread:@selector(removeProgressMeter) withObject:nil waitUntilDone:NO];
        }
        // Update UI screen while in the downloading process
        [self performSelectorInBackground:@selector(updateProgressMeter:) withObject:[NSString stringWithFormat:@"%d", self.loopStartIndex + i + 1]];
    }
}
If I don't release responseData, my app shows a memory warning and crashes. If I do release it, I get a "[NSConcreteMutableData release]: message sent to deallocated instance 0x1e931de0" error.
How can I refine this code? Can anyone suggest how to rework it?
Please help me out.
Your responseData returned by sendSynchronousRequest: is autoreleased, so you shouldn't release it yourself. At first sight I don't see a memory leak in your code; it is possible that your application actually uses too much memory without leaking it. Try placing an autorelease pool inside your for loop:
for (...) {
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
// your original code with a lot of autoreleased objects
[pool release];
}
If you wrap your code in an autorelease pool, all objects that are sent the autorelease message inside the wrapped section are actually released when the pool itself is released: this way you purge the memory on every iteration of the loop.
See also Using Autorelease Pools in the docs; it specifically mentions that you should use them "if you write a loop that creates many temporary objects".
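Under ARC, where NSAutoreleasePool cannot be used directly, the equivalent is an @autoreleasepool block:
for (...) {
    @autoreleasepool {
        // your original code with a lot of autoreleased objects
    }
}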