I am trying to achieve something like this using Vision and ARKit: my idea is to get landmark points from Vision and place nodes at those points. I am using this demo as a reference. So far, I have been able to find the landmark points of the face using Vision. Now I need to use those points in ARKit to add nodes to the scene, but I am unable to get the depth, which is essential for each node's position.
After searching SO, I found this post on converting a CGPoint to an SCNVector3, but I'm stuck because I don't have any reference plane to hit-test against for depth.
So, how can I get an accurate depth from CGPoints without using hitTest, or is there another way to achieve the result shown in the video?
Here is the code I have implemented:
CGPoint faceRectCenter = (CGPoint){
    CGRectGetMidX(faceRect), CGRectGetMidY(faceRect)
}; // faceRect is the detected face bounding box

__block NSMutableArray<ARHitTestResult *> *testResults = [NSMutableArray new];
void (^hitTest)(void) = ^{
    NSArray<ARHitTestResult *> *hitTestResults = [self.sceneView hitTest:faceRectCenter types:ARHitTestResultTypeFeaturePoint];
    if (hitTestResults.count > 0) {
        // Take the first result that is more than 10 cm away
        ARHitTestResult *firstResult = nil;
        for (ARHitTestResult *result in hitTestResults) {
            if (result.distance > 0.10) {
                firstResult = result;
                [testResults addObject:firstResult];
                break;
            }
        }
    }
};

for (int i = 0; i < 3; i++) {
    hitTest();
}

if (testResults.count > 0) {
    NSLog(@"%@", testResults);
    SCNVector3 postion = averagePostion([testResults copy]);
    NSLog(@"<%.1f,%.1f,%.1f>", postion.x, postion.y, postion.z);
    __block SCNNode *textNode = [ARTextNode nodeWithText:name Position:postion];
    SCNVector3 plane = [self.sceneView projectPoint:textNode.position];
    float projectedDepth = plane.z;
    NSLog(@"projectedDepth: %f", projectedDepth);
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.sceneView.scene.rootNode addChildNode:textNode];
        [textNode show];
    });
}
else {
    // NSLog(@"HitTest invalid");
}
Any help will be great!!
My current method is:
CGDataProviderRef provider = CGImageGetDataProvider(imageRef);
imageData.rawData = CGDataProviderCopyData(provider);
imageData.imageData = (UInt8 *) CFDataGetBytePtr(imageData.rawData);
I only get about 30 frames per second. I know part of the performance hit is copying the data; it would be nice if I could just access the stream of bytes directly without it automatically creating a copy for me.
I'm trying to process CGImageRefs as fast as possible. Is there a faster way?
Here's my working solution snippet:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    // Insert code here to initialize your application
    //timer = [NSTimer scheduledTimerWithTimeInterval:1.0/60.0 //2000.0
    //                                         target:self
    //                                       selector:@selector(timerLogic)
    //                                       userInfo:nil
    //                                        repeats:YES];
    leagueGameState = [LeagueGameState new];
    [self updateWindowList];
    lastTime = CACurrentMediaTime();

    // Create a capture session
    mSession = [[AVCaptureSession alloc] init];

    // Set the session preset as you wish
    mSession.sessionPreset = AVCaptureSessionPresetMedium;

    // If you're on a multi-display system and you want to capture a secondary display,
    // you can call CGGetActiveDisplayList() to get the list of all active displays.
    // For this example, we just specify the main display.
    // To capture both a main and secondary display at the same time, use two active
    // capture sessions, one for each display. On Mac OS X, AVCaptureMovieFileOutput
    // only supports writing to a single video track.
    CGDirectDisplayID displayId = kCGDirectMainDisplay;

    // Create a ScreenInput with the display and add it to the session
    AVCaptureScreenInput *input = [[AVCaptureScreenInput alloc] initWithDisplayID:displayId];
    input.minFrameDuration = CMTimeMake(1, 60);
    //if (!input) {
    //    [mSession release];
    //    mSession = nil;
    //    return;
    //}
    if ([mSession canAddInput:input]) {
        NSLog(@"Added screen capture input");
        [mSession addInput:input];
    } else {
        NSLog(@"Couldn't add screen capture input");
    }

    //********************** Add output here
    //dispatch_queue_t _videoDataOutputQueue;
    //_videoDataOutputQueue = dispatch_queue_create( "com.apple.sample.capturepipeline.video", DISPATCH_QUEUE_SERIAL );
    //dispatch_set_target_queue( _videoDataOutputQueue, dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_HIGH, 0 ) );

    AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
    videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [videoOut setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    // RosyWriter sets alwaysDiscardsLateVideoFrames to NO because it records video and
    // wants no dropped frames even under momentary load spikes. Since this app only
    // processes and displays frames, discarding late frames (YES) is fine here.
    videoOut.alwaysDiscardsLateVideoFrames = YES;

    if ([mSession canAddOutput:videoOut]) {
        NSLog(@"Added output video");
        [mSession addOutput:videoOut];
    } else {
        NSLog(@"Couldn't add output video");
    }

    // Start running the session
    [mSession startRunning];
    NSLog(@"Set up session");
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    //NSLog(@"Captures output from sample buffer");
    //CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription( sampleBuffer );
    /*
    if ( self.outputVideoFormatDescription == nil ) {
        // Don't render the first sample buffer.
        // This gives us one frame interval (33ms at 30fps) for setupVideoPipelineWithInputFormatDescription: to complete.
        // Ideally this would be done asynchronously to ensure frames don't back up on slower devices.
        [self setupVideoPipelineWithInputFormatDescription:formatDescription];
    }
    else {*/
        [self renderVideoSampleBuffer:sampleBuffer];
    //}
}
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    //CVPixelBufferRef renderedPixelBuffer = NULL;
    //CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
    //[self calculateFramerateAtTimestamp:timestamp];

    // We must not use the GPU while running in the background.
    // setRenderingEnabled: takes the same lock so the caller can guarantee no GPU usage once the setter returns.
    //@synchronized( _renderer )
    //{
    //    if ( _renderingEnabled ) {
    CVPixelBufferRef sourcePixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );

    const int kBytesPerPixel = 4;
    CVPixelBufferLockBaseAddress( sourcePixelBuffer, 0 );
    int bufferWidth = (int)CVPixelBufferGetWidth( sourcePixelBuffer );
    int bufferHeight = (int)CVPixelBufferGetHeight( sourcePixelBuffer );
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow( sourcePixelBuffer );
    uint8_t *baseAddress = CVPixelBufferGetBaseAddress( sourcePixelBuffer );

    int count = 0;
    for ( int row = 0; row < bufferHeight; row++ )
    {
        uint8_t *pixel = baseAddress + row * bytesPerRow;
        for ( int column = 0; column < bufferWidth; column++ )
        {
            count++;
            pixel[1] = 0; // De-green (the second byte in BGRA is green)
            pixel += kBytesPerPixel;
        }
    }
    CVPixelBufferUnlockBaseAddress( sourcePixelBuffer, 0 );

    //NSLog(@"Test Looped %d times", count);

    CIImage *ciImage = [CIImage imageWithCVImageBuffer:sourcePixelBuffer];
    /*
    CIContext *temporaryContext = [CIContext contextWithCGContext:
                                   [[NSGraphicsContext currentContext] graphicsPort]
                                   options: nil];

    CGImageRef videoImage = [temporaryContext
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(sourcePixelBuffer),
                                                 CVPixelBufferGetHeight(sourcePixelBuffer))];
    */

    //UIImage *uiImage = [UIImage imageWithCGImage:videoImage];

    // Create a bitmap rep from the image...
    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCIImage:ciImage];
    // Create an NSImage and add the bitmap rep to it...
    NSImage *image = [[NSImage alloc] init];
    [image addRepresentation:bitmapRep];
    // Set the output view to the new NSImage.
    [imageView setImage:image];

    //CGImageRelease(videoImage);

    //renderedPixelBuffer = [_renderer copyRenderedPixelBuffer:sourcePixelBuffer];
    //    }
    //    else {
    //        return;
    //    }
    //}

    // Profile code: report elapsed time and fps every 3 seconds.
    if (CACurrentMediaTime() - lastTime > 3)
    {
        float time = CACurrentMediaTime() - lastTime;
        [fpsText setStringValue:[NSString stringWithFormat:@"Elapsed Time: %f ms, %f fps", time * 1000 / loopsTaken, (1000.0)/(time * 1000.0 / loopsTaken)]];
        lastTime = CACurrentMediaTime();
        loopsTaken = 0;
        [self updateWindowList];
        if (leagueGameState.leaguePID == -1) {
            [statusText setStringValue:@"No League Instance Found"];
        }
    }
    else
    {
        loopsTaken++;
    }
}
I get a very nice 60 frames per second even after looping through the data.
It captures the screen, I get the data, I modify the data, and I re-display it.
Which "stream of bytes" do you mean? CGImage represents the final bitmap data, but under the hood it may still be compressed. The bitmap may currently be stored on the GPU, so getting to it might require a GPU->CPU fetch (which is expensive, and should be avoided when you don't need it).
If you're trying to do this at greater than 30fps, you may want to rethink how you're attacking the problem, and use tools designed for that, like Core Image, Core Video, or Metal. Core Graphics is optimized for display, not processing (and definitely not real-time processing). A key difference in tools like Core Image is that you can perform more of your work on the GPU without shuffling data back to the CPU. This is absolutely critical for maintaining fast pipelines. Whenever possible, you want to avoid getting the actual bytes.
If you have a CGImage already, you can convert it to a CIImage with imageWithCGImage: and then use CIImage to process it further. If you really need access to the bytes, your options are the one you're using, or to render it into a bitmap context (which also will require copying) with CGContextDrawImage. There's just no promise that a CGImage has a bunch of bitmap bytes hanging around at any given time that you can look at, and it doesn't provide "lock your buffer" methods like you'll find in real-time frameworks like Core Video.
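To make those two routes concrete, here is a minimal sketch (the cgImage variable and the 8-bit RGBA layout are assumptions for illustration, not from the original post):

// Option 1: hand the CGImage to Core Image and keep the work on the GPU.
CIImage *ciImage = [CIImage imageWithCGImage:cgImage];
// ...apply CIFilter chains and render with a CIContext, without ever touching raw bytes.

// Option 2: if you really do need the bytes, draw into your own bitmap context (this copies).
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                         colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
uint8_t *bytes = CGBitmapContextGetData(ctx); // valid for the lifetime of ctx
// ...process width * height * 4 bytes of RGBA data here...
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);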
Some very good introductions to high-speed image processing from WWDC videos:
WWDC 2013 Session 509 Core Image Effects and Techniques
WWDC 2014 Session 514 Advances in Core Image
WWDC 2014 Sessions 603-605 Working with Metal
I am using SpriteKit.
The code below basically makes a lattice of dots on the screen. However, I want to give each 'dot' a different name based on its position, so that I can access each dot individually in another method. I'm struggling a little with this, so I would really appreciate it if someone could point me in the right direction.
#define kRowCount 8
#define kColCount 6
#define kDotGridSpacing CGSizeMake(50, -50)

#import "BBMyScene.h"

@implementation BBMyScene

-(id)initWithSize:(CGSize)size {
    if (self = [super initWithSize:size]) {
        /* Setup your scene here */

        // Background
        self.backgroundColor = [SKColor colorWithRed:0.957 green:0.957 blue:0.957 alpha:1]; /* #f4f4f4 */

        CGPoint baseOrigin = CGPointMake(35, 385);
        for (NSUInteger row = 0; row < kRowCount; ++row) {
            CGPoint dotPosition = CGPointMake(baseOrigin.x, row * (kDotGridSpacing.height) + baseOrigin.y);
            for (NSUInteger col = 0; col < kColCount; ++col) {
                SKSpriteNode *dot = [SKSpriteNode spriteNodeWithImageNamed:@"dot"];
                dot.position = dotPosition;
                [self addChild:dot];
                dotPosition.x += kDotGridSpacing.width;
            }
        }
    }
    return self;
}
Here is an image of what appears on screen when I run the above code...
http://cl.ly/image/3q2j3E0p1S1h/Image1.jpg
I simply want to be able to call an individual dot to do something when there is some form of user interaction, and I'm not sure how I would do that without each dot having a different name.
If anyone could help I would really appreciate it.
Thanks,
Ben
- (void)update:(NSTimeInterval)currentTime {
    for (SKNode *node in self.children) {
        if ([node.name containsString:@"sampleNodeName"]) {
            [node removeFromParent];
        }
    }
}
Hope this one helps!
You can set the name property of each node inside the loop.
Then you can access them with self.children[index].
If you want to find a specific node in your children, you have to enumerate through the array.
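For example, a minimal sketch of naming each dot by its row and column inside the asker's loop (the "dot_row_col" format is just an illustration):

SKSpriteNode *dot = [SKSpriteNode spriteNodeWithImageNamed:@"dot"];
dot.position = dotPosition;
// Give every dot a unique, position-based name so it can be found later.
dot.name = [NSString stringWithFormat:@"dot_%lu_%lu", (unsigned long)row, (unsigned long)col];
[self addChild:dot];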
Update:
To clarify how to search for an item by iterating, here is a helper method:
- (SKNode *)findNodeNamed:(NSString *)nodeName
{
    SKNode *nodeToFind = nil;
    for (SKNode *node in self.children) {
        if ([node.name isEqualToString:nodeName]) {
            nodeToFind = node;
            break;
        }
    }
    return nodeToFind;
}
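Using the hypothetical names from the sketch above, a single dot can then be looked up and manipulated, either with this helper or with SpriteKit's built-in childNodeWithName::

SKNode *dot = [self findNodeNamed:@"dot_2_3"];   // or: [self childNodeWithName:@"dot_2_3"]
if (dot) {
    // React to user interaction, e.g. briefly scale the dot up.
    [dot runAction:[SKAction scaleTo:1.5 duration:0.2]];
}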
I am writing a video using UIImages. I can successfully write the movie.
The problem is that when I play the video, the first few frames show up green.
Could anyone help me? I am new to iOS.
Thank you.
I think the fps (frames per second) of your video may be low. In that case, you do not need to call CVPixelBufferPoolCreatePixelBuffer. That call produces an empty first frame, which is why the base green screen is shown.
for (UIImage *img in imageArray)
{
    buffer = [self pixelBufferFromCGImage:[img CGImage] andSize:size];

    if (frameCount == 0) {
        // Please comment out the following line
        // CVPixelBufferPoolCreatePixelBuffer (NULL, adaptor.pixelBufferPool, &buffer);
    }

    BOOL append_ok = NO;
    int j = 0;
    while (!append_ok && j < 30)
    {
        if (adaptor.assetWriterInput.readyForMoreMediaData)
        {
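The snippet is cut off above; for context, the loop typically continues by appending the buffer with a presentation time and retrying while the writer isn't ready. A rough sketch of that continuation (the fps value, retry delay, and buffer cleanup are assumptions, not part of the original answer):

            CMTime frameTime = CMTimeMake(frameCount, (int32_t)fps); // fps is assumed, e.g. 30
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
        }
        else
        {
            [NSThread sleepForTimeInterval:0.1]; // writer not ready yet; wait and retry
        }
        j++;
    }
    if (buffer) {
        CVPixelBufferRelease(buffer);
    }
    frameCount++;
}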
I've been pulling my hair out over this.
I've found a few things here, but nothing actually seems to work. And the documentation is really limited.
What I'm trying to figure out here is how to get the start timecode of a QuickTime movie in Objective-C from the timecode track, and how to get a human-readable output from it.
I've found this:
SMPTE TimeCode from Quick Time
It works perfectly in 32-bit mode, but it doesn't work in 64-bit mode because of the QuickTime API. The software I need to incorporate it into is already 64-bit and must continue to run as 64-bit.
I'm losing my mind here. Anyone out there know about these APIs?
Ultimately, the goal here is to figure out the start timecode of the QuickTime movie because it's needed to set the OFFSET in FCP X XML files. Without it, the video files are brought in without audio (or, really, the audio is just slipped a lot).
Use AVFoundation framework instead of QuickTime. The player initialisation is well explained in the documentation: https://developer.apple.com/library/mac/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/02_Playback.html#//apple_ref/doc/uid/TP40010188-CH3-SW2
Once your AVAsset is loaded in memory, you can extract the first sample frame number (timeStampFrame) by reading the content of the timecode track if present:
long timeStampFrame = 0;

for (AVAssetTrack *track in [_asset tracks]) {
    if ([[track mediaType] isEqualToString:AVMediaTypeTimecode]) {
        AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:_asset error:nil];
        AVAssetReaderTrackOutput *assetReaderOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];

        if ([assetReader canAddOutput:assetReaderOutput]) {
            [assetReader addOutput:assetReaderOutput];

            if ([assetReader startReading] == YES) {
                int count = 0;

                while ([assetReader status] == AVAssetReaderStatusReading) {
                    CMSampleBufferRef sampleBuffer = [assetReaderOutput copyNextSampleBuffer];
                    if (sampleBuffer == NULL) {
                        if ([assetReader status] == AVAssetReaderStatusFailed)
                            break;
                        else
                            continue;
                    }
                    count++;

                    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
                    size_t length = CMBlockBufferGetDataLength(blockBuffer);

                    if (length > 0) {
                        unsigned char *buffer = malloc(length);
                        memset(buffer, 0, length);
                        CMBlockBufferCopyDataBytes(blockBuffer, 0, length, buffer);

                        for (int i = 0; i < length; i++) {
                            timeStampFrame = (timeStampFrame << 8) + buffer[i];
                        }
                        free(buffer);
                    }
                    CFRelease(sampleBuffer);
                }

                if (count == 0) {
                    NSLog(@"No sample in the timecode track: %@", [assetReader error]);
                }
                NSLog(@"Processed %d sample", count);
            }
        }
        if ([assetReader status] != AVAssetReaderStatusCompleted)
            [assetReader cancelReading];
    }
}
This is a little trickier than the QuickTime API, and there is surely room to improve the code above, but it works for me.
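The question also asks for a human-readable result. Assuming a known, non-drop-frame integer frame rate (in practice you would read it from the timecode track's format description; it is hard-coded here purely as an assumption), the frame number converts like this:

int fps = 25; // assumption; read the real rate from the timecode track's format description
long frames  = timeStampFrame % fps;
long seconds = (timeStampFrame / fps) % 60;
long minutes = (timeStampFrame / (fps * 60)) % 60;
long hours   = timeStampFrame / (fps * 3600);
NSString *timecode = [NSString stringWithFormat:@"%02ld:%02ld:%02ld:%02ld", hours, minutes, seconds, frames];
NSLog(@"Start timecode: %@", timecode);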
I have an MKMapView (obviously) that shows housing locations around the user.
I have a radius tool: when a selection is made, annotations should be added or removed based on their distance from the user.
I have the adding/removing working fine, but for some reason the annotations won't show up until I zoom in or out.
This is the method that adds/removes the annotations based on distance. I have tried two different variations of it:
1. Adds the new annotations to an array, then adds them to the map with [mapView addAnnotations:NSArray].
2. Adds the annotations one at a time as it finds them, using [mapView addAnnotation:MKMapAnnotation].
1.
- (void)updateBasedDistance:(NSNumber *)distance {
    // Set up the increment for the HUD loading animation
    float hudIncrement = (1.0f / [[[[self appDelegate] rssParser] rssItems] count]);

    // Remove all the current annotations from the map
    [self._mapView removeAnnotations:self._mapView.annotations];

    // Holds all the new annotations to add to the map
    NSMutableArray *tempAnnotations;

    /*
     I have an array that holds all the annotations on the map because
     a lot of filtering/searching happens. So for memory reasons it is
     more efficient to load annotations once, then add/remove as needed.
     */
    for (int i = 0; i < [annotations count]; i++) {
        // Current annotation's location
        CLLocation *tempLoc = [[CLLocation alloc] initWithLatitude:[[annotations objectAtIndex:i] coordinate].latitude longitude:[[annotations objectAtIndex:i] coordinate].longitude];
        // Distance of the current annotation from the user location, converted to miles
        CLLocationDistance miles = [self._mapView.userLocation.location distanceFromLocation:tempLoc] * 0.000621371192;

        // If the distance is less than the user's selection, add it to the map.
        if (miles <= [distance floatValue]) {
            if (tempAnnotations == nil)
                tempAnnotations = [[NSMutableArray alloc] init];
            [tempAnnotations addObject:[annotations objectAtIndex:i]];
        }

        // For some reason, even with ARC, this helps a little with memory consumption
        tempLoc = nil;

        // Update a progress HUD I use.
        HUD.progress += hudIncrement;
    }

    // Add the new annotations to the map
    if (tempAnnotations != nil)
        [self._mapView addAnnotations:tempAnnotations];
}
2.
- (void)updateBasedDistance:(NSNumber *)distance {
    // Set up the increment for the HUD loading animation
    float hudIncrement = (1.0f / [[[[self appDelegate] rssParser] rssItems] count]);

    // Remove all the current annotations from the map
    [self._mapView removeAnnotations:self._mapView.annotations];

    /*
     I have an array that holds all the annotations on the map because
     a lot of filtering/searching happens. So for memory reasons it is
     more efficient to load annotations once, then add/remove as needed.
     */
    for (int i = 0; i < [annotations count]; i++) {
        // Current annotation's location
        CLLocation *tempLoc = [[CLLocation alloc] initWithLatitude:[[annotations objectAtIndex:i] coordinate].latitude longitude:[[annotations objectAtIndex:i] coordinate].longitude];
        // Distance of the current annotation from the user location, converted to miles
        CLLocationDistance miles = [self._mapView.userLocation.location distanceFromLocation:tempLoc] * 0.000621371192;

        // If the distance is less than the user's selection, add it to the map.
        if (miles <= [distance floatValue])
            [self._mapView addAnnotation:[annotations objectAtIndex:i]];

        // For some reason, even with ARC, this helps a little with memory consumption
        tempLoc = nil;

        // Update a progress HUD I use.
        HUD.progress += hudIncrement;
    }
}
I have also tried the following at the end of the above method:
[self._mapView setNeedsDisplay];
[self._mapView setNeedsLayout];
Also, to force a refresh (I saw somewhere that it might work):
self._mapView.showsUserLocation = NO;
self._mapView.showsUserLocation = YES;
Any help would be very much appreciated and as always, thank you for taking the time to read.
I'm going to guess that updateBasedDistance: gets called from a background thread. Check with NSLog(@"Am I in the UI thread? %d", [NSThread isMainThread]);. If it prints 0, then you should move the removeAnnotations: and addAnnotation: calls onto the main thread, either with a performSelectorOnMainThread: invocation or with GCD blocks on the main queue.
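For example, a minimal sketch of the GCD route, wrapping the map mutations from the question in a dispatch to the main queue (only the dispatch wrapper is new):

dispatch_async(dispatch_get_main_queue(), ^{
    // UIKit/MapKit must be touched on the main thread.
    [self._mapView removeAnnotations:self._mapView.annotations];
    if (tempAnnotations != nil) {
        [self._mapView addAnnotations:tempAnnotations];
    }
});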