Motion sensor, reading rotations - Objective-C

I have tried this project on both Android and iOS with little success. There is a good chance that this stuff is just over my head. However, I figured I would post my question here as a last effort.
I'm trying to figure out when a device is rotated or flipped. My app should know when it did a 180 or a 360, or if the device was flipped vertically.
In an attempt to understand how it's supposed to work, I downloaded two example projects: AccelerometerGraph and CoreMotionTeapot. With these, plus a mix of other things I've figured out, I ended up trying this:
motionManager = [[CMMotionManager alloc] init];
motionManager.accelerometerUpdateInterval = 0.01;
motionManager.deviceMotionUpdateInterval = 0.01;
[motionManager startDeviceMotionUpdates];
if (motionManager.gyroAvailable) {
    motionManager.gyroUpdateInterval = 1.0 / 60.0;
    motionManager.deviceMotionUpdateInterval = 0.01;
    [motionManager startGyroUpdatesToQueue:[NSOperationQueue currentQueue]
                               withHandler:^(CMGyroData *gyroData, NSError *error)
    {
        CMRotationRate rotate = gyroData.rotationRate;
        NSLog(@"rotation rate = [%f, %f, %f]", rotate.x, rotate.y, rotate.z);
    }];
} else {
    NSLog(@"No gyroscope on device.");
}
But I do not know how to derive the information I need (horizontal and vertical rotations) from these three values (x, y, z).

What you're attempting is not trivial, but is certainly possible. This video should be very helpful in understanding the capabilities of the device and how to get closer to your goal:
http://www.youtube.com/watch?v=C7JQ7Rpwn2k
While he's talking about Android, the same concepts apply to the iPhone.

From Apple's documentation, CMMotionManager Class Reference (sorry, it's a lot of reading; I've bolded some sentences for quick skimming):
After creating an instance of CMMotionManager, an application can use it to receive four types of motion: raw accelerometer data, raw gyroscope data, raw magnetometer data, and processed device-motion data (which includes accelerometer, rotation-rate, and attitude measurements). The processed device-motion data provided by Core Motion’s sensor fusion algorithms gives the device’s attitude, rotation rate, calibrated magnetic fields, the direction of gravity, and the acceleration the user is imparting to the device.
Important: An application should create only a single instance of the CMMotionManager class. Multiple instances of this class can affect the rate at which data is received from the accelerometer and gyroscope.
An application can take one of two approaches when receiving motion data, by handling it at specified update intervals or periodically sampling the motion data. With both of these approaches, the application should call the appropriate stop method (stopAccelerometerUpdates, stopGyroUpdates, stopMagnetometerUpdates, and stopDeviceMotionUpdates) when it has finished processing accelerometer, rotation-rate, magnetometer, or device-motion data.
Handling Motion Updates at Specified Intervals
To receive motion data at specific intervals, the application calls a “start” method that takes an operation queue (instance of NSOperationQueue) and a block handler of a specific type for processing those updates. The motion data is passed into the block handler. The frequency of updates is determined by the value of an “interval” property.
Accelerometer. Set the accelerometerUpdateInterval property to specify an update interval. Call the startAccelerometerUpdatesToQueue:withHandler: method, passing in a block of type CMAccelerometerHandler. Accelerometer data is passed into the block as CMAccelerometerData objects.
Gyroscope. Set the gyroUpdateInterval property to specify an update interval. Call the startGyroUpdatesToQueue:withHandler: method, passing in a block of type CMGyroHandler. Rotation-rate data is passed into the block as CMGyroData objects.
Magnetometer. Set the magnetometerUpdateInterval property to specify an update interval. Call the startMagnetometerUpdatesToQueue:withHandler: method, passing a block of type CMMagnetometerHandler. Magnetic-field data is passed into the block as CMMagnetometerData objects.
Device motion. Set the deviceMotionUpdateInterval property to specify an update interval. Call the startDeviceMotionUpdatesUsingReferenceFrame:toQueue:withHandler: or startDeviceMotionUpdatesToQueue:withHandler: method, passing in a block of type CMDeviceMotionHandler. With the former method (new in iOS 5.0), you can specify a reference frame to be used for the attitude estimates. Device-motion data is passed into the block as CMDeviceMotion objects.
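For reference, here is a minimal sketch of that block-based device-motion approach (mine, not part of the quoted documentation), assuming an existing motionManager instance:

if (motionManager.deviceMotionAvailable) {
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;
    [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                       withHandler:^(CMDeviceMotion *motion, NSError *error) {
        if (error) return;
        // attitude holds roll, pitch and yaw in radians, already fused by Core Motion
        CMAttitude *attitude = motion.attitude;
        NSLog(@"roll %.2f pitch %.2f yaw %.2f", attitude.roll, attitude.pitch, attitude.yaw);
    }];
}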
Periodic Sampling of Motion Data
To handle motion data by periodic sampling, the application calls a “start” method taking no arguments and periodically accesses the motion data held by a property for a given type of motion data. This is the recommended approach for applications such as games. Handling accelerometer data in a block introduces additional overhead, and most game applications are interested only in the latest sample of motion data when they render a frame.
Accelerometer. Call startAccelerometerUpdates to begin updates and periodically access CMAccelerometerData objects by reading the accelerometerData property.
Gyroscope. Call startGyroUpdates to begin updates and periodically access CMGyroData objects by reading the gyroData property.
Magnetometer. Call startMagnetometerUpdates to begin updates and periodically access CMMagnetometerData objects by reading the magnetometerData property.
Device motion. Call the startDeviceMotionUpdatesUsingReferenceFrame: or startDeviceMotionUpdates method to begin updates and periodically access CMDeviceMotion objects by reading the deviceMotion property. The startDeviceMotionUpdatesUsingReferenceFrame: method (new in iOS 5.0) lets you specify a reference frame to be used for the attitude estimates.
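A quick sketch of that polling style (again mine, not Apple's; the per-frame callback name is hypothetical):

// Somewhere during setup:
[motionManager startDeviceMotionUpdates];

// Called from your render/game loop, e.g. a CADisplayLink callback:
- (void)update
{
    CMDeviceMotion *motion = motionManager.deviceMotion;  // nil until the first sample arrives
    if (motion) {
        CMRotationRate rate = motion.rotationRate;
        NSLog(@"rotation rate = [%f, %f, %f]", rate.x, rate.y, rate.z);
    }
}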
About gathering the data :
@property(readonly) CMGyroData *gyroData
Discussion
If no gyroscope data is available, the value of this property is nil. An application that is receiving gyroscope data after calling startGyroUpdates periodically checks the value of this property and processes the gyroscope data.
So you should have something like
gyroData.rotationRate.x
gyroData.rotationRate.y
gyroData.rotationRate.z
By storing these values and comparing them periodically, you should be able to see whether the device flipped around an axis, etc.
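For example, here is a rough sketch (my own, with hypothetical ivars lastTimestamp and accumulatedZ) of integrating the rotation rate in the gyro handler to notice a 180 or 360 around the z axis. Note that raw gyro integration drifts over time, so reset the accumulator when appropriate:

- (void)handleGyroData:(CMGyroData *)gyroData
{
    if (lastTimestamp > 0) {
        NSTimeInterval dt = gyroData.timestamp - lastTimestamp;
        accumulatedZ += gyroData.rotationRate.z * dt;   // radians turned about z so far
        if (fabs(accumulatedZ) >= 2.0 * M_PI) {
            NSLog(@"full 360 around z");
            accumulatedZ = 0;
        } else if (fabs(accumulatedZ) >= M_PI) {
            NSLog(@"at least a 180 around z");
        }
    }
    lastTimestamp = gyroData.timestamp;
}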

It all depends on the iPhone's position. Say the phone gets flipped 360 degrees around the y axis: the compass won't change, because it will still be pointing the same way throughout the flip. And that's not all. My hint is to log the accelerometer, compare the data you've collected with the movement made, and then identify the stages of the trick and build a list of stages for each trick.

Then maybe what you're looking for is just the device orientation. You should look at the UIDevice Class Reference. In particular the
– beginGeneratingDeviceOrientationNotifications
– endGeneratingDeviceOrientationNotifications
methods.
and use it like this:
[UIDevice currentDevice].orientation
In return you'll get one of these possible values:
typedef enum {
    UIDeviceOrientationUnknown,
    UIDeviceOrientationPortrait,
    UIDeviceOrientationPortraitUpsideDown,
    UIDeviceOrientationLandscapeLeft,
    UIDeviceOrientationLandscapeRight,
    UIDeviceOrientationFaceUp,
    UIDeviceOrientationFaceDown
} UIDeviceOrientation;
So you'll be able to check if it's in portrait (up or down) or landscape (left or right) and if it has been flipped.
You'll also be able to implement the following UIViewController methods:
- willRotateToInterfaceOrientation:duration:
- didRotateFromInterfaceOrientation:
You can look in this link to check how you can implement the methods.
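If you go the notification route, a minimal sketch (the handler name is just an example):

// During setup:
[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(orientationChanged:)
                                             name:UIDeviceOrientationDidChangeNotification
                                           object:nil];

- (void)orientationChanged:(NSNotification *)note
{
    UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
    if (orientation == UIDeviceOrientationPortraitUpsideDown) {
        NSLog(@"device was flipped upside down");
    }
}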

Related

Reusing a CGContext causing odd performance losses

My class is rendering images offscreen. I thought reusing the CGContext instead of creating the same context again and again for every image would be a good thing. I set a member variable _imageContext so that I only have to create a new context when _imageContext is nil, like so:
if (!_imageContext)
    _imageContext = [self contextOfSize:imageSize];
instead of:
CGContextRef imageContext = [self contextOfSize:imageSize];
Of course I do not release the CGContext anymore.
These are the only changes I made, but it turns out that reusing the context slowed rendering down from about 10ms to 60ms. Have I missed something? Do I have to clear the context before drawing into it again? Or is recreating the context for each image the correct approach?
EDIT
I found the weirdest connection...
While I was searching for the reason the app's memory usage increases dramatically when the app starts rendering the images, I found that the problem was where I set the rendered image on an NSImageView.
imageView.image = nil;
imageView.image = [[NSImage alloc] initWithCGImage:_imageRef size:size];
It looks like ARC is not releasing the previous NSImage. The first way I found to avoid that was to draw the new image into the old one.
[imageView.image lockFocus];
[[[NSImage alloc] initWithCGImage:_imageRef size:size] drawInRect:NSMakeRect(0, 0, size.width, size.height) fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
[imageView.image unlockFocus];
[imageView setNeedsDisplay];
The memory problem was gone. But what happened to the CGContext-reuse problem?
Not reusing the context now takes 20ms instead of 10ms - of course drawing into an image takes longer than just setting it.
Reusing the context also takes 20ms instead of 60ms. But why? I don't see how there could be any connection, yet I can reproduce the old state, where reusing takes more time, just by setting the NSImageView's image instead of drawing into it.
I investigated this, and I observe the same slowdown. Looking with Instruments set to sample kernel calls as well as userland calls shows the culprit. @RyanArtecona's comment was on the right track. I focused Instruments on the bottom-most userland call, CGSColorMaskCopyARGB8888_sse, in two test runs (one reusing contexts, the other making a new one every time), and then inverted the resulting call tree. In the case where the context is not reused, I see that the heaviest kernel trace is:
Running Time Self Symbol Name
668.0ms 32.3% 668.0 __bzero
668.0ms 32.3% 0.0 vm_fault
668.0ms 32.3% 0.0 user_trap
668.0ms 32.3% 0.0 CGSColorMaskCopyARGB8888_sse
This is the kernel zeroing out pages of memory that are being faulted in by virtue of CGSColorMaskCopyARGB8888_sse accessing them. What this means is that the CGContext maps VM pages to back the bitmap context but the kernel doesn't actually do the work associated with that operation until someone actually accesses that memory. The actual mapping/fault happens on first access.
Now let's look at the heaviest kernel trace when we DO reuse the context:
Running Time Self Symbol Name
1327.0ms 35.0% 1327.0 bcopy
1327.0ms 35.0% 0.0 user_trap
1327.0ms 35.0% 0.0 CGSColorMaskCopyARGB8888_sse
This is the kernel copying pages. My money would be on this being the underlying copy-on-write mechanism that delivers the behavior @RyanArtecona was talking about in his comment:
In the Apple docs for CGBitmapContextCreateImage, it says the actual
bit-copying operation doesn't happen until more drawing is done on the
original context.
In the contrived case I used to test, the non-reuse case took 3392ms to execute and the reuse case took 4693ms (significantly slower). Considering just the single heaviest trace from each case, the kernel trace indicates that we spend 668.0ms zero filling new pages on the first access, and 1327.0ms writing into the copy-on-write pages on the first write after the image gets a reference to those pages. This is a difference of 659ms. This one difference alone accounts for ~50% of the gap between the two cases.
So, to distill it down a little, the non-reused context is faster because when you create the context it knows the pages are empty, and there's no one else with a reference to those pages to force them to be copied when you write to them. When you reuse the context, the pages are referenced by someone else (the image you created) and must be copied on the first write, so as to preserve the state of the image when the state of the context changes.
You could further explore what's going on here by looking at the virtual memory map of the process as you step through in the debugger. vmmap is the helpful tool for that.
Practically speaking, you should probably just create a new CGContext every time.
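If it helps, here is a sketch of the per-image, non-reused version that suggestion implies (variable names are illustrative, not from the question):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(NULL,
                                                  imageSize.width, imageSize.height,
                                                  8,              // bits per component
                                                  0,              // let CG pick bytesPerRow
                                                  colorSpace,
                                                  kCGImageAlphaPremultipliedLast);
// ... draw the image content into imageContext ...
CGImageRef imageRef = CGBitmapContextCreateImage(imageContext);
// hand imageRef off (e.g. wrap it in an NSImage), then tear everything down
// so the next image gets fresh, zero-filled pages instead of copy-on-write ones
CGImageRelease(imageRef);
CGContextRelease(imageContext);
CGColorSpaceRelease(colorSpace);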
To complement @ipmcc's excellent and thorough answer, here is an instructional overview.
In the Apple docs for CGBitmapContextCreateImage it is stated:
The CGImage object returned by this function is created by a copy
operation. In some cases the copy
operation actually follows copy-on-write semantics, so that the actual
physical copy of the bits occur only if the underlying data in the
bitmap graphics context is modified.
So, when this function is called, the image's underlying bits may not be copied right away; instead, they may wait to be copied the next time the bitmap context is modified. This bit-copying may be expensive (depending on the size and colorspace of the context), and may disguise itself in an Instruments profile as part of whatever CGContext... drawing function gets called next on the context (when the bits are forced to copy). This is probably what is happening here with CGContextDrawImage.
However, the docs go on to say this:
As a consequence, you may want to use the resulting image and release
it before you perform additional drawing into the bitmap graphics
context. In this way, you can avoid the actual physical copy of the
data.
This implies that if you will be finished using the in-memory created image (i.e. it has been saved to disk, sent over the network, etc.) by the time you need to do more drawing in the context, the image would never need to be physically copied at all!
TL;DR
If at some point you need to pull a CGImage out of a bitmap context, and you won't need to keep any references to it (including setting it as a UIImageView's image) before you do any more drawing in the context, then it is probably a good idea to use CGBitmapContextCreateImage. If not, your image will be physically copied at some point, which may take a while, and it may be better to just use a new context each time.

iOS AVFoundation to stream video into OpenGL Texture

According to:
Can I use AVFoundation to stream downloaded video frames into an OpenGL ES texture?
It's possible to get the frames from remote media. However, I've been trying the suggestion, and the documentation on the use of AVPlayerItemVideoOutput is not very clear. It seems to have a delegate method, outputMediaDataWillChange:, which receives the AVPlayerItemVideoOutput instance.
Maybe I'm making a wrong assumption, but is this delegate method called every time the data will change? Is it the right place to get the CVPixelBuffer?
The outputMediaDataWillChange: method will only be called after you register with requestNotificationOfMediaDataChangeWithAdvanceInterval:, usually when you pause your app or the like.
You can access the pixel buffer in your display link hook. Look for hasNewPixelBufferForItemTime: and copyPixelBufferForItemTime:itemTimeForDisplay: in Apple's sample code (it's for OS X, but it's basically the same for iOS).
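As a rough sketch of that display-link hook (my assumption of the usual pattern; videoOutput is your AVPlayerItemVideoOutput):

- (void)displayLinkFired:(CADisplayLink *)link
{
    CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([videoOutput hasNewPixelBufferForItemTime:itemTime]) {
        CVPixelBufferRef pixelBuffer =
            [videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
        if (pixelBuffer) {
            // upload to your OpenGL texture here (e.g. via a CVOpenGLESTextureCache)
            CVBufferRelease(pixelBuffer);
        }
    }
}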
Probably not. You will need to update the texture on the same thread your GL work runs on, or on some other thread with a shared context, not on the thread where you get the delegate callback that the media data has been updated. You could set a boolean to true in that callback to notify the GL thread that the buffer is ready and should be collected. Alternatively, you could push some "target/selector pair" to be performed on the GL thread to collect the data (similar to performSelectorOnMainThread:), but then again you should ask yourself whether such a pair already exists on the stack, in case the media updates change the data faster than your GL is refreshing... In any case, if you use that delegate and don't handle it correctly, it will either not update the texture at all or block your GL thread.
I think you should use the FFmpeg library, as it can connect to any streaming server and get the picture as raw data. After that you can do anything with that picture.

multi track mp3 playback for iOS application

I am doing an application that involves playing back a song in a multi track format (drums, vocals, guitar, piano, etc...). I don't need to do any fancy audio processing to each track, all I need to be able to do is play, pause, and mute/unmute each track.
I had been using multiple instances of AVAudioPlayer, but when performing device testing I noticed that the tracks play very slightly out of sync when they are first started. Furthermore, when I pause and resume the tracks they drift further out of sync. After a bit of research I've realized that AVAudioPlayer just has too much latency and won't work for my application.
In my application I basically had an NSArray of AVAudioPlayers that I would loop through to play, pause, or stop each one; I'm sure this is what caused them to get out of sync on the device.
It seemed like Apple's audio mixer would work well for me, but when I try implementing it I get an EXC_BAD_ACCESS error that I can't figure out.
I know the answer is to use OpenAL or audio units, but it just seems unnecessary to spend weeks learning about these when all I need to do is play around 5 .mp3 tracks at the same time. Does anyone have any suggestions on how to accomplish this? Thanks
Thanks to admsyn's suggestion I was able to come up with a solution.
AVAudioPlayer has a currentTime property that returns the current playback time of a track and can also be set.
So I implemented startSynchronizedPlayback as described by admsyn, and then added the following when stopping the tracks:
- (void)stopAll
{
    NSUInteger count = [tracksArr count];
    for (NSUInteger i = 0; i < count; i++)
    {
        trackModel = [tracksArr objectAtIndex:i];
        if (i == 0)
        {
            currentTime = [trackModel currentTime];
        }
        [trackModel stop];
        [trackModel setCurrentTime:currentTime];
    }
}
This code basically loops through my array of tracks, each of which holds its own AVAudioPlayer, grabs the current time from the first track, then sets all of the following tracks to that time. Now when I use the startSynchronizedPlayback method they all play in sync, and pausing and unpausing keeps them in sync as well. Hope this is helpful to someone else trying to keep tracks in sync.
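For completeness, the matching start method might look something like this (my sketch; it assumes, like stopAll above, that each object in tracksArr forwards AVAudioPlayer calls such as playAtTime: and deviceCurrentTime):

- (void)playAll
{
    NSTimeInterval shortStartDelay = 0.05;  // seconds
    NSTimeInterval now = [[tracksArr objectAtIndex:0] deviceCurrentTime];
    for (id track in tracksArr) {
        [track playAtTime:now + shortStartDelay];
    }
}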
If you're issuing individual play messages to each AVAudioPlayer, it is entirely likely that the messages are arriving at different times, or that the AVAudioPlayers finish their warm up phase out of sync with each other. You should be using playAtTime: and the deviceCurrentTime property to achieve proper synchronization. Note the description of deviceCurrentTime:
Use this property to indicate “now” when calling the playAtTime: instance method. By configuring multiple audio players to play at a specified offset from deviceCurrentTime, you can perform precise synchronization—as described in the discussion for that method.
Also note the example code in the playAtTime: discussion:
// Before calling this method, instantiate two AVAudioPlayer objects and
// assign each of them a sound.
- (void)startSynchronizedPlayback {
    NSTimeInterval shortStartDelay = 0.01; // seconds
    NSTimeInterval now = player.deviceCurrentTime;
    [player playAtTime:now + shortStartDelay];
    [secondPlayer playAtTime:now + shortStartDelay];
    // Here, update state and user interface for each player, as appropriate
}
If you are able to decode the files to disk, then audio units are probably the solution which would provide the best latency. If you decide to use such an architecture, you should also check out Novocaine:
https://github.com/alexbw/novocaine
That framework takes a lot of the headache out of dealing with audio units.

Detecting collision during a CAKeyframeAnimation

Is it possible to detect the collision of two UIImageViews while one is travelling along a path during a CAKeyframeAnimation?
If so, how is this done? I have tried multiple methods, including checking both CGRects for intersection during the animation, but I can't find a suitable way to run a method during a CAKeyframeAnimation and detect a collision between the path and the UIImageView.
You need to get the properties from the presentation layer. It holds the best approximation of the values in effect while the animation is running. Access it via
view.layer.presentationLayer
Look at the documentation for CALayer/presentationLayer for more details.
When you want to check for collisions, you would grab the presentationLayer of each object, then access whatever properties you want to test for collision. The exact way to check would depend on which type of layer, and whether you wanted simple hitTest-ing or depth checking. Only you know when and what type of collisions you want to look for.
However, to access the properties of an object while it is animating, you need the presentationLayer.
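A minimal sketch of such a check (my own; movingView and obstacleView are hypothetical UIImageViews):

- (BOOL)viewsCollide
{
    CALayer *a = movingView.layer.presentationLayer;
    CALayer *b = obstacleView.layer.presentationLayer;
    if (!a || !b) return NO;   // presentationLayer can be nil before the animation starts
    return CGRectIntersectsRect(a.frame, b.frame);
}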
EDIT
You can make these checks whenever you want. You can do them in the context of another action, or with an NSTimer at some interval. You can even use CADisplayLink, which will hook you into the animation timer itself.
If you use CADisplayLink, I suggest setting frameInterval to the highest value that still does what you want, so as not to impact performance.
timer = [CADisplayLink displayLinkWithTarget:self selector:@selector(checkForCollisions)];
// The callback fires every frame, which is 60 times per second.
// Only call back every 6 frames (which is ten times per second).
timer.frameInterval = 6;
[timer addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
Don't forget to invalidate the timer when you are done.

How to seek within an audio track using avassetreader?

I'm familiar with how to stream audio data from the iPod library using AVAssetReader, but I'm at a loss as to how to seek within the track, e.g. start playback at the halfway point. Starting from the beginning and then sequentially getting successive samples is easy, but surely there must be a way to have random access?
AVAssetReader has a property, timeRange, which determines the time range of the asset from which media data will be read.
@property(nonatomic) CMTimeRange timeRange
The intersection of the value of this property and CMTimeRangeMake(kCMTimeZero, asset.duration) determines the time range of the asset from which media data will be read.
The default value is CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity). You cannot change the value of this property after reading has started.
So, if you want to seek to the middle of the track, you'd create a CMTimeRange that starts at asset.duration/2 and runs to the end of the asset, and set that as the timeRange on the AVAssetReader.
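Something along these lines (a sketch; reader is your AVAssetReader, and the range must be set before calling startReading):

CMTime halfway = CMTimeMultiplyByFloat64(asset.duration, 0.5);
CMTime remainder = CMTimeSubtract(asset.duration, halfway);
reader.timeRange = CMTimeRangeMake(halfway, remainder);  // CMTimeRange is a start time plus a duration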
AVAssetReader is amazingly slow when seeking. If you try to recreate an AVAssetReader to seek while the user is dragging a slider, your app will bring iOS to its knees.
Instead, you should use an AVAssetReader for fast, forward-only access to video frames, and then also use an AVPlayerItem and AVPlayerItemVideoOutput when the user wants to seek with a slider.
It would be nice if Apple combined AVAssetReader and AVPlayerItem / AVPlayerItemVideoOutput into a new class that was performant and was able to seek quickly.
Be aware that AVPlayerItemVideoOutput will not give back pixel buffers unless there is an AVPlayer attached to the AVPlayerItem. This is obviously a strange implementation detail, but it is what it is.
If you are using AVPlayer and AVPlayerLayer, then you can simply use the seek methods on AVPlayer itself. The above details are only important if you are doing custom rendering with the pixel buffers and/or need to send the pixel buffers to an AVAssetWriter.
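For example, a hedged sketch of seeking to the halfway point with AVPlayer itself:

CMTime halfway = CMTimeMultiplyByFloat64(player.currentItem.duration, 0.5);
[player seekToTime:halfway
   toleranceBefore:kCMTimeZero
    toleranceAfter:kCMTimeZero
 completionHandler:^(BOOL finished) {
     // resume playback or refresh the UI once the seek lands
 }];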