I am trying to change the camera's exposure duration in response to a pan gesture:
AVCaptureDeviceInput *deviceInput = (AVCaptureDeviceInput *)input;
NSError *error;
if ([deviceInput.device isExposurePointOfInterestSupported] && [deviceInput.device isExposureModeSupported:AVCaptureExposureModeCustom]) {
    Float64 minExposure = CMTimeGetSeconds([deviceInput.device.activeFormat minExposureDuration]);
    Float64 maxExposure = CMTimeGetSeconds([deviceInput.device.activeFormat maxExposureDuration]);
    Float64 currentExposure = CMTimeGetSeconds([deviceInput.device exposureDuration]);
    Float64 delta = translation.y * (maxExposure - minExposure) + minExposure;
    Float64 newExposure = MIN(MAX(currentExposure + delta, minExposure), maxExposure);
    [deviceInput.device addObserver:self forKeyPath:@"isAdjustingExposure" options:NSKeyValueObservingOptionNew context:nil];
    [deviceInput.device lockForConfiguration:&error];
    [deviceInput.device setExposureMode:AVCaptureExposureModeAutoExpose];
    [deviceInput.device setExposurePointOfInterest:focusPoint];
    [deviceInput.device setExposureModeCustomWithDuration:CMTimeMake(newExposure, 0) ISO:AVCaptureISOCurrent completionHandler:nil];
    [deviceInput.device unlockForConfiguration];
}
Then based on this answer to a previous question I am also observing "isAdjustingExposure" in order to lock the exposure when it's set:
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary<NSKeyValueChangeKey,id> *)change
                       context:(void *)context {
    if ([keyPath isEqualToString:@"isAdjustingExposure"]) {
        NSNumber *value = [object valueForKey:@"isAdjustingExposure"];
        if (value && ![value boolValue]) {
            [object setExposureMode:AVCaptureExposureModeLocked];
        }
    }
}
But for some reason this method is never called. I am not sure whether this last step is even necessary, but in any case it is not working: the current exposure value always stays the same, and I can also see visually that the camera never adjusts its exposure.
The key name is "adjustingExposure", not "isAdjustingExposure". Also, your exposure duration becomes 0/0 because of the Float64 to int64_t conversion in CMTimeMake(), so try
CMTimeMakeWithSeconds(newExposure, 10000)
for your custom exposure duration.
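Putting both fixes together, the relevant calls might look like this (a sketch based on the code above, with the corrected key path and duration):
NSError *error = nil;
if ([deviceInput.device lockForConfiguration:&error]) {
    // The KVO key path is "adjustingExposure"; KVC resolves it to -isAdjustingExposure.
    [deviceInput.device addObserver:self
                         forKeyPath:@"adjustingExposure"
                            options:NSKeyValueObservingOptionNew
                            context:nil];
    // CMTimeMakeWithSeconds preserves the fractional seconds; a timescale
    // of 10000 gives 0.1 ms resolution.
    [deviceInput.device setExposureModeCustomWithDuration:CMTimeMakeWithSeconds(newExposure, 10000)
                                                      ISO:AVCaptureISOCurrent
                                        completionHandler:nil];
    [deviceInput.device unlockForConfiguration];
}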
I have a Metal-based application that uses AVFoundation for movie playback and seeking. To start, I am only processing .mov files and nothing else, and the app in question will not process any other format. While it has worked well in the past, I recently received feedback from some M1 users that only black frames show up in their app, regardless of where they set the seek bar.
I have performed the following troubleshooting in my attempts to find the root cause of this black-texture bug:
Verified that the video being processed is of the .mov movie file type.
Verified that the CVPixelBufferRef object returned from AVPlayerItemVideoOutput's -copyPixelBufferForItemTime: is valid, i.e. not nil.
Verified that the MTLTexture created from the CVPixelBufferRef is also valid, i.e. also not nil.
Converted the MTLTexture to a bitmap and saved it as a .JPEG image to the user's disk.
The last part is probably the most important step here, as the saved images are also all black (for users experiencing the bug) when viewed in Finder, which led me to suspect that I might be using AVFoundation incorrectly. While I hope my opening post is not too long code-wise, below are the steps I am performing in order to process videos to be rendered using Metal, for your reference.
Inside my reader class, I have the following properties:
VideoReaderClass.m
@property (retain) AVPlayer *vidPlayer;
@property (retain) AVPlayerItem *vidPlayerItem;
@property (retain) AVPlayerItemVideoOutput *playerItemVideoOutput;
@property (retain) AVMutableComposition *videoMutableComp;
@property (assign) AVPlayerItemStatus playerItemStatus;
// The frame duration of the composition, as well as of the media being processed.
@property (assign) CMTime frameDuration;
// Tracks the time at which to insert new footage.
@property (assign) CMTime startInsertTime;
// Weak reference to a delegate responsible for processing seeks.
@property (weak) id<VideoReaderRenderer> vidReaderRenderer;
A method called before reading starts. Handles observer cleanup as well.
- (void)initializeReadMediaPipeline
{
    [_lock lock];
    _startInsertTime = kCMTimeZero;
    _playerItemStatus = AVPlayerItemStatusUnknown;
    if (_videoMutableComp)
    {
        [_videoMutableComp release];
    }
    _videoMutableComp = [AVMutableComposition composition];
    [_videoMutableComp retain];
    if (_vidPlayer)
    {
        [[_vidPlayer currentItem] removeObserver:self
                                      forKeyPath:@"status"
                                         context:MyAddAVPlayerItemKVOContext];
        [[_vidPlayer currentItem] removeObserver:self
                                      forKeyPath:@"playbackBufferFull"
                                         context:MyAVPlayerItemBufferFullKVOContext];
        [[_vidPlayer currentItem] removeObserver:self
                                      forKeyPath:@"playbackBufferEmpty"
                                         context:MyAVPlayerItemBufferEmptyKVOContext];
        [[_vidPlayer currentItem] removeObserver:self
                                      forKeyPath:@"playbackLikelyToKeepUp"
                                         context:MyAVPlayerItemBufferKeepUpKVOContext];
        [_vidPlayer release];
        _vidPlayer = nil;
    }
    [_lock unlock];
}
In this class there is a public method, to be called on a background queue, for when the app should begin processing a video or a set of videos.
- (BOOL)addMediaAtLocation:(NSURL *)location
{
    BOOL result = NO;
    [_lock lock];
    NSDictionary *optsInfo =
    @{
        AVURLAssetPreferPreciseDurationAndTimingKey : @(YES)
    };
    AVURLAsset *assetURL = [AVURLAsset URLAssetWithURL:location options:optsInfo];
    AVAssetTrack *assetTrack = [assetURL tracksWithMediaType:AVMediaTypeVideo].firstObject;
    NSError *error = nil;
    [_videoMutableComp insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration)
                               ofAsset:assetURL
                                atTime:_startInsertTime
                                 error:&error];
    if (!error)
    {
        [_vidPlayerItem addObserver:self
                         forKeyPath:@"status"
                            options:NSKeyValueObservingOptionOld | NSKeyValueObservingOptionNew
                            context:MyAddAVPlayerItemKVOContext];
        [_vidPlayerItem addObserver:self
                         forKeyPath:@"playbackBufferFull"
                            options:NSKeyValueObservingOptionOld | NSKeyValueObservingOptionNew
                            context:MyAVPlayerItemBufferFullKVOContext];
        [_vidPlayerItem addObserver:self
                         forKeyPath:@"playbackBufferEmpty"
                            options:NSKeyValueObservingOptionOld | NSKeyValueObservingOptionNew
                            context:MyAVPlayerItemBufferEmptyKVOContext];
        [_vidPlayerItem addObserver:self
                         forKeyPath:@"playbackLikelyToKeepUp"
                            options:NSKeyValueObservingOptionOld | NSKeyValueObservingOptionNew
                            context:MyAVPlayerItemBufferKeepUpKVOContext];
        [_vidPlayer replaceCurrentItemWithPlayerItem:_vidPlayerItem];
    }
    _vidPlayer = [[AVPlayer alloc] init];
    if (_playerItemVideoOutput)
    {
        [_playerItemVideoOutput release];
        _playerItemVideoOutput = nil;
    }
    [_lock unlock];
    return result;
}
Called externally by our view controller when seeking is needed.
- (void)seekToFrame:(CMTime)time
{
    if (_vidReaderRenderer)
    {
        if (_vidPlayerItem && _playerItemVideoOutput)
        {
            [_vidPlayerItem seekToTime:time
                       toleranceBefore:kCMTimeZero
                        toleranceAfter:kCMTimeZero
                     completionHandler:^(BOOL finished) {
                if (finished)
                {
                    CVPixelBufferRef p_buffer = [_playerItemVideoOutput copyPixelBufferForItemTime:time itemTimeForDisplay:nil];
                    if (p_buffer)
                    {
                        [_vidReaderRenderer seekOperationFinished:p_buffer];
                    }
                }
            }];
        }
    }
}
Lastly, this is where I handle the notification when the AVPlayerItem's status changes to ReadyToPlay.
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary<NSKeyValueChangeKey,id> *)change
                       context:(void *)context
{
    // ... Checking for right contexts here, omitting for this example.
    if ([keyPath isEqualToString:@"status"])
    {
        AVPlayerItemStatus status = AVPlayerItemStatusUnknown;
        // Get the status change from the change dictionary.
        NSNumber *statusNumber = change[NSKeyValueChangeNewKey];
        if ([statusNumber isKindOfClass:[NSNumber class]])
        {
            status = statusNumber.integerValue;
        }
        // Switch over the status.
        switch (status)
        {
            case AVPlayerItemStatusReadyToPlay:
            {
                // Ready to play.
                if (_vidPlayerItem)
                {
                    [_lock lock];
                    NSDictionary *attribs =
                    @{
                        (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_64RGBAHalf)
                    };
                    _playerItemVideoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attribs];
                    [_vidPlayerItem addOutput:_playerItemVideoOutput];
                    [_vidPlayer setRate:0.0];
                    _playerItemStatus = status;
                    [_lock unlock];
                    // "Wake up" the AVPlayer/AVPlayerItem here.
                    [self seekToFrame:kCMTimeZero];
                }
                break;
            }
        }
    }
}
The code listing below is the class that also acts as the delegate for a custom protocol called VideoReaderRenderer, which handles the seekToTime: completion block as well as converting the pixel buffer to an MTLTexture:
RendererDelegate.m
At some point in its initialization and before performing any seek operations, I instantiate the CVMetalTextureCacheRef.
CVReturn ret_val = CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, _mtlDevice, nil, &_metalTextureCache);
The method that my VideoReader class calls inside seekToTime:'s completion block:
- (void)seekOperationFinished:(CVPixelBufferRef)pixelBuffer
{
    CVMetalTextureRef mtl_tex = NULL;
    size_t w = CVPixelBufferGetWidth(pixelBuffer);
    size_t h = CVPixelBufferGetHeight(pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    CVReturn ret_val = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                 _metalTextureCache,
                                                                 pixelBuffer,
                                                                 nil,
                                                                 MTLPixelFormatRGBA16Float,
                                                                 w,
                                                                 h,
                                                                 0,
                                                                 &mtl_tex);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    if (ret_val != kCVReturnSuccess)
    {
        if (mtl_tex != NULL)
        {
            CVBufferRelease(mtl_tex);
            CVPixelBufferRelease(pixelBuffer);
        }
        return;
    }
    id<MTLTexture> inputTexture = CVMetalTextureGetTexture(mtl_tex);
    if (!inputTexture)
        return;
    NSSize texSize = NSMakeSize(inputTexture.width, inputTexture.height);
    _viewPortSize = (simd_uint2){(uint)texSize.width, (uint)texSize.height};
    // Create the texture here.
    [_textureLock lock];
    if (NSEqualSizes(_projectSize, texSize))
    {
        if (!_inputFrameTex)
        {
            MTLTextureDescriptor *texDescriptor = [MTLTextureDescriptor new];
            texDescriptor.width = texSize.width;
            texDescriptor.height = texSize.height;
            texDescriptor.pixelFormat = MTLPixelFormatRGBA16Float;
            texDescriptor.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
            _inputFrameTex = [_mtlDevice newTextureWithDescriptor:texDescriptor];
        }
        id<MTLCommandBuffer> commandBuffer = [_drawCopyCommandQueue commandBuffer];
        id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
        [blitEncoder copyFromTexture:inputTexture
                         sourceSlice:0
                         sourceLevel:0
                        sourceOrigin:MTLOriginMake(0, 0, 0)
                          sourceSize:MTLSizeMake(inputTexture.width, inputTexture.height, 1)
                           toTexture:_inputFrameTex
                    destinationSlice:0
                    destinationLevel:0
                   destinationOrigin:MTLOriginMake(0, 0, 0)];
        [blitEncoder endEncoding];
        [commandBuffer commit];
        [commandBuffer waitUntilCompleted];
    }
    [_textureLock unlock];
    CVBufferRelease(mtl_tex);
    CVPixelBufferRelease(pixelBuffer);
    // Added "troubleshooting" code to save the contents of the MTLTexture as a JPEG onto the user's disk.
    if (_inputFrameTex)
    {
        // ... Save contents of texture to disk as JPEG.
    }
}
Once again, my apologies for the rather long post. Additional details: our rendering is displayed on a custom NSView backed by a CAMetalLayer, whose draw calls are driven by a CVDisplayLink for background rendering, though I don't think that is the source of the seekToTime: black-frame problem. Can anyone shed some light on my situation? Thank you very much in advance.
I have a controller that is registered as an observer for a LOT of properties on views. This is our -observeValueForKeyPath:ofObject:change:context: method:
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if (context == kStrokeColorWellChangedContext)
    {
        [self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:kStrokeColorProperty];
    }
    else if (context == kFillColorWellChangedContext)
    {
        [self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:kFillColorProperty];
    }
    else if (context == kBodyStyleNumChangedContext)
    {
        [self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:kBodyStyleNumProperty];
    }
    else if (context == kStyleChangedContext)
    {
        [self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:kStyleProperty];
    }
    else if (context == kStepStyleChangedContext)
    {
        [self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:kStepStyleProperty];
    }
    else if (context == kFirstHeadStyleChangedContext)
    {
        [self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:kFirstHeadStyleProperty];
    }
    else if (context == kSecondHeadStyleChangedContext)
    {
        [self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:kSecondHeadStyleProperty];
    }
And there's actually about 3x more of these else if statements.
One thing you can see is that each block has the same code, which makes me think that it's possible to optimize this.
My initial thought was to have an NSDictionary called keyPathForContextDictionary where the keys are the constants with the Context suffix (of type void*), and the values are the appropriate string constants, denoted by the Property suffix
Then this method would only need one line:
[self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:keyPathForContextDictionary[context]];
Note that I need to use a data structure of some sort to identify which key path to use; I can't simply use the keyPath argument passed into the method. This is because multiple views share the same observed property (for example, color wells all have a color property), so each view needs its own unique key path, which is currently determined from the context.
The problem with this is that you cannot use void* as keys in an NSDictionary. So... does anybody have any recommendations for what I could do here?
EDIT:
Here's an example of how the constants are defined:
void * const kStrokeColorWellChangedContext = (void *)&kStrokeColorWellChangedContext;
void * const kFillColorWellChangedContext = (void *)&kFillColorWellChangedContext;
void * const kBodyStyleNumChangedContext = (void *)&kBodyStyleNumChangedContext;
void * const kStyleChangedContext = (void *)&kStyleChangedContext;
NSString * const kStrokeColorProperty = @"strokeColor";
NSString * const kFillColorProperty = @"fillColor";
NSString * const kShadowProperty = @"shadow";
NSString * const kBodyStyleNumProperty = @"bodyStyleNum";
NSString * const kStyleProperty = @"style";
The type void * is not so much a type unto itself that you have to match, as it is "generic pointer". It's used for the context argument precisely so that you can use any underlying type that you like, including an object type. All you have to do is perform the proper casts.
You can therefore change your kTHINGYChangedContexts to be NSStrings or any other object you like very easily, and then use them as keys in your context->key path mapping.
Start with:
NSString * const kStrokeColorWellChangedContext = @"StrokeColorWellChangedContext";
When you register for observation, you must perform a bridged cast:
[colorWell addObserver:self
            forKeyPath:keyPath
               options:options
               context:(__bridge void *)kStrokeColorWellChangedContext];
Then when the observation occurs, you do the reverse cast:
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)ctx
{
    NSString *context = (__bridge NSString *)ctx;
    // Use context, not ctx, from here on.
}
And proceed to your key path lookup from there.
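For example, the lookup itself could be this simple (a sketch; contextToKeyPath is a hypothetical NSDictionary property mapping each NSString context to its key path, and it assumes every context registered by this class is one of those strings):
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)ctx
{
    // Assumes all contexts used with this observer are the NSString constants above.
    NSString *context = (__bridge NSString *)ctx;
    NSString *property = self.contextToKeyPath[context];
    if (property) {
        [self setValue:change[NSKeyValueChangeNewKey] forKey:property];
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:ctx];
    }
}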
Josh Caswell has a great answer, but I didn't want to change the type of our constants to NSString *.
So the solution instead was to wrap the void* in NSValues with -valueWithPointer. This way I could use the void* as keys in my dictionary.
Here's the code:
NSString *toolKeyPath = [[ToolController keyPathFromContextDictionary] objectForKey:[NSValue valueWithPointer:context]];
if (toolKeyPath)
{
    if ([change objectForKey:NSKeyValueChangeNewKey] == (id)[NSNull null])
    {
        [self setValue:nil forKey:toolKeyPath];
    }
    else
    {
        [self setValue:[change objectForKey:NSKeyValueChangeNewKey] forKey:toolKeyPath];
    }
}
And the dictionary:
+ (NSDictionary *)keyPathFromContextDictionary
{
    return @{
        [NSValue valueWithPointer:kStrokeColorWellChangedContext] : kStrokeColorProperty,
        [NSValue valueWithPointer:kFillColorWellChangedContext] : kFillColorProperty,
        [NSValue valueWithPointer:kBodyStyleNumChangedContext] : kBodyStyleNumProperty,
        [NSValue valueWithPointer:kStyleChangedContext] : kStyleProperty,
        [NSValue valueWithPointer:kStepStyleChangedContext] : kStepStyleProperty,
        [NSValue valueWithPointer:kFirstHeadStyleChangedContext] : kFirstHeadStyleProperty,
        [NSValue valueWithPointer:kSecondHeadStyleChangedContext] : kSecondHeadStyleProperty,
        [NSValue valueWithPointer:kShadowChangedContext] : kShadowProperty,
        [NSValue valueWithPointer:kStrokeWidthChangedContext] : kStrokeWidthProperty,
        [NSValue valueWithPointer:kBlurRadiusChangedContext] : kBlurRadiusProperty,
        [NSValue valueWithPointer:kFontSizeChangedContext] : kFontSizeProperty
    };
}
I've got an observer on my text fields which looks to see if the "enabled" property has changed.
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    UITextField *txtField = (UITextField *)object;
    BOOL new = [[change objectForKey:NSKeyValueChangeNewKey] boolValue];
    BOOL old = [[change objectForKey:NSKeyValueChangeOldKey] boolValue];
    if ((new != old) && (new = YES))
    {
        [self fadeDisable:txtField];
    }
    else if ((new != old) && (new = NO))
    {
        [self fadeEnable:txtField];
    }
}
I thought that if I used int new and int old, the 1 or 0 defining whether the property is enabled would be returned, but when I use NSLog to see what they bring back, it's a long string of numbers.
I looked through the documentation and it seems that objectForKey actually returns an id, not an integer, but I'm not sure what to do.
Edit: I've added the code for my comparison, which tries to determine whether the field went from disabled to enabled (or vice versa). I also added the boolValue correction as recommended.
It does not give the intended result and doesn't call the correct method. Is it correct?
Thanks
NSDictionary contains objects (like NSNumber), not primitive types (like int). As you have noticed,
[change objectForKey:NSKeyValueChangeNewKey]
returns id. If you want to convert it to int, use
int new = [[change objectForKey:NSKeyValueChangeNewKey] intValue]
Or if the property is a BOOL, even better:
BOOL new = [[change objectForKey:NSKeyValueChangeNewKey] boolValue]
This line of code
int new = [change objectForKey:NSKeyValueChangeNewKey]
results in storing value of pointer to the NSNumber object into new integer, this is the "long string of numbers" you mentioned. Strangely, it does compile without even a warning.
I was wondering if there is any way to get a more accurate version of the contentOffset, or estimate/calculate the contentOffset or (preferably) the first derivative of contentOffset of a UIScrollView. I am trying to perform an action when the rate of change of the contentOffset of my UIScrollView is very small, but 0.5f isn't quite precise enough.
Any help is greatly appreciated.
You can't get better precision than what contentOffset provides. You could, however, calculate velocity using the regular ds/dt equation:
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    static CGFloat prevPos = 0.0; // you could store these in ivars instead
    static NSTimeInterval prevTime = 0.0;
    CGFloat newPos = scrollView.contentOffset.y;
    NSTimeInterval newTime = [NSDate timeIntervalSinceReferenceDate];
    double v = (newPos - prevPos) / (newTime - prevTime);
    prevPos = newPos;
    prevTime = newTime;
}
However, if you are feeling extremely hacky and don't mind unsafe code, you can peek at UIScrollView's velocity ivars directly by using this category:
@interface UIScrollView (UnsafeVelocity)
- (double)unsafeVerticalVelocity;
@end

@implementation UIScrollView (UnsafeVelocity)
- (double)unsafeVerticalVelocity
{
    double returnValue = 0.0;
    id verticalVel = nil;
    @try {
        verticalVel = [self valueForKey:@"_verticalVelocity"];
    }
    @catch (NSException *exception) {
        NSLog(@"KVC peek failed!");
    }
    @finally {
        if ([verticalVel isKindOfClass:[NSNumber class]]) {
            returnValue = [verticalVel doubleValue];
        }
    }
    return returnValue;
}
@end
To get horizontal velocity, replace _verticalVelocity with _horizontalVelocity. Notice that the values you get seem to be scaled differently. I repeat once more: while this is (probably) the best velocity value you can get, it is very fragile and not future-proof.
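For example, from your scroll view delegate (the 0.01 threshold here is purely illustrative; tune it to whatever scale the ivar turns out to use):
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    double v = [scrollView unsafeVerticalVelocity];
    if (fabs(v) < 0.01) {
        // The rate of change is very small: perform your action here.
    }
}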
I'm working on a problem where I have to download around 10 different large files in a queue, and I need to display a progress bar indicating the status of the total transfer. I have this working just fine with ASIHTTPRequest in iOS4, but I'm trying to transition to AFNetworking since ASIHTTPRequest has issues in iOS5 and is no longer maintained.
I know you can report progress on individual requests using AFHTTPRequestOperation's downloadProgressBlock, but I can't seem to find a way to report overall progress of multiple requests that would be executed on the same NSOperationQueue.
Any suggestions? Thanks!
[operation setUploadProgressBlock:^(NSInteger bytesWritten, NSInteger totalBytesWritten, NSInteger totalBytesExpectedToWrite) {
    NSLog(@"Sent %d of %d bytes", totalBytesWritten, totalBytesExpectedToWrite);
}];
operation is an AFHTTPRequestOperation.
You can subclass AFURLConnectionOperation to add two new properties: (NSInteger)totalBytesSent and (NSInteger)totalBytesExpectedToSend. You should set these properties in the NSURLConnection callback like so:
- (void)connection:(NSURLConnection *)connection
   didSendBodyData:(NSInteger)bytesWritten
 totalBytesWritten:(NSInteger)totalBytesWritten
totalBytesExpectedToWrite:(NSInteger)totalBytesExpectedToWrite
{
    [super connection:connection didSendBodyData:bytesWritten totalBytesWritten:totalBytesWritten totalBytesExpectedToWrite:totalBytesExpectedToWrite];
    self.totalBytesSent = totalBytesWritten;
    self.totalBytesExpectedToSend = totalBytesExpectedToWrite;
}
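For completeness, the subclass declaration can be as minimal as this (the class name here is illustrative):
// Hypothetical subclass exposing cumulative progress to the outside.
@interface ProgressTrackingOperation : AFURLConnectionOperation
// Counters mirrored from the NSURLConnection callback above.
@property (assign) NSInteger totalBytesSent;
@property (assign) NSInteger totalBytesExpectedToSend;
@end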
Your uploadProgress block may look like this:
……(NSInteger bytesWritten, NSInteger totalBytesWritten, NSInteger totalBytesExpectedToWrite) {
    NSInteger queueTotalExpected = 0;
    NSInteger queueTotalSent = 0;
    // NSOperationQueue itself isn't enumerable; iterate its operations array.
    for (AFURLConnectionOperation *operation in self.operationQueue.operations) {
        queueTotalExpected += operation.totalBytesExpectedToSend;
        queueTotalSent += operation.totalBytesSent;
    }
    self.totalProgress = (double)queueTotalSent / (double)queueTotalExpected;
}];
I would try subclassing UIProgressView with a subclass that keeps track of all the different items you are watching and then has logic that adds the progress of them all together.
With code like this perhaps:
@implementation customUIProgressView

// progressQueue is assumed to be an NSMutableArray of NSNumbers,
// one entry per item being tracked.
- (void)updateItem:(int)itemNum ToPercent:(NSNumber *)percentDoneOnItem {
    [self.progressQueue replaceObjectAtIndex:itemNum withObject:percentDoneOnItem];
    [self updateProgress];
}

- (void)updateProgress {
    float tempProgress = 0;
    for (NSNumber *itemProgress in self.progressQueue) {
        tempProgress += [itemProgress floatValue];
    }
    self.progress = tempProgress / [self.progressQueue count];
}

@end
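To feed this from AFNetworking, each operation's download progress block could update its slot in the view, roughly like so (a sketch assuming AFNetworking 1.x's setDownloadProgressBlock: signature, and that progressQueue was seeded with one NSNumber per operation):
// operations is the array of AFHTTPRequestOperations; progressView is the
// customUIProgressView instance from above.
[operations enumerateObjectsUsingBlock:^(AFHTTPRequestOperation *op, NSUInteger i, BOOL *stop) {
    [op setDownloadProgressBlock:^(NSUInteger bytesRead, long long totalBytesRead, long long totalBytesExpectedToRead) {
        NSNumber *fraction = @((float)totalBytesRead / (float)totalBytesExpectedToRead);
        [progressView updateItem:(int)i ToPercent:fraction];
    }];
}];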