What does the value at setVolume in AVAudioPlayer do/mean? (Objective-C)

I am working in Xcode 4.5.1 and developing for the iPhone.
I am using AVAudioPlayer to play sound. I am playing two sounds and want to compute the ratio of their average volumes relative to each other.
I gather the relevant information using averagePowerForChannel: in combination with an NSTimer, checking the volume of both sound files at an interval. However, I have discovered that, regardless of the value I pass to setVolume, a given sound file always returns the same average power, for example -20.0. I conclude that the metering does not take into account any volume changes you apply using setVolume.
setVolume accepts values from 0.0 to 1.0. Is there a way to convert these values to something meaningful that I can apply to the volume information I have gathered using averagePowerForChannel:? I am assuming that I can't simply multiply my average power value by the setVolume value. I have looked in the class reference, but I could not find anything useful.
Please point me in the right direction. Any input is appreciated.
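For what it's worth, the relationship between a linear gain (the 0.0-1.0 range setVolume takes) and decibels (the unit averagePowerForChannel: reports) is the standard dB = 20 * log10(gain). A minimal sketch of the conversion, written in C# only to match the other snippets on this page; the helper names are mine, not part of any API:

    using System;

    static class GainDb
    {
        // Linear gain (0.0-1.0, as passed to setVolume) to decibels:
        // 1.0 -> 0 dB, 0.5 -> about -6 dB, 0.1 -> -20 dB.
        public static double LinearToDb(double gain) =>
            gain <= 0 ? double.NegativeInfinity : 20.0 * Math.Log10(gain);

        // Decibels back to a linear gain.
        public static double DbToLinear(double db) => Math.Pow(10.0, db / 20.0);
    }

Assuming the meter is taken before the volume stage (which matches the behavior described above), the effective output level would be roughly the metered dB plus LinearToDb(volume), rather than a simple multiplication.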

Related

Peak detect and hold in LabVIEW

I've inherited a LabVIEW "circuit" that integrates g's to output IPS. The problem is, the output text window (double), at full speed, has numbers scrolling so fast you can't read them. I only need to see the largest number detected. I'm not too well versed in LabVIEW - can anyone help me with a function that will display the largest number output to the text window for a duration of 1/2 second? I'm basically looking for a peak detect-and-hold function... I'd prefer to work with the double-precision value that is constantly updated if possible, rather than the array feeding my integrator. I tried looking through the Functions > Signal Processing menu and saw one peak detector, but I'm not sure that's the right utility.
Thanks!
It's easier to use the Array Max & Min PtByPt.vi, which can be found in the Signal Processing, Point By Point palette. Below is a VI snippet showing how it works.
It will update the maximum value every 10 points. A waveform chart showing the values is also attached.
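For readers who want the logic rather than the VI, the point-by-point max with a periodic reset amounts to something like this sketch (written in C# since LabVIEW is graphical; the class and names are illustrative):

    // Point-by-point peak detect and hold: track the maximum of the last
    // `windowSize` samples and publish it once per window, so the displayed
    // value stays readable instead of scrolling at full sample rate.
    public sealed class PeakHold
    {
        private readonly int windowSize;
        private double currentMax = double.NegativeInfinity;
        private int count;

        public PeakHold(int windowSize) { this.windowSize = windowSize; }

        public double HeldPeak { get; private set; }

        // Feed each new sample; HeldPeak only changes at window boundaries.
        public void Add(double sample)
        {
            if (sample > currentMax) currentMax = sample;
            if (++count >= windowSize)
            {
                HeldPeak = currentMax;
                currentMax = double.NegativeInfinity;
                count = 0;
            }
        }
    }

With a window size matching however many samples arrive in half a second, HeldPeak gives the slowly updating display value the question asks for.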

Vuforia for HoloLens

I was wondering if someone else has tried developing a HoloLens application using Vuforia - specifically, using Vuforia's capacity to recognize and track objects.
I tried, and it seems to be working. I was just not sure about the result I got from the Debug.Log call that prints the name of the tracked object.
I tried putting two trackable targets millimeters away from each other and pointed my gaze towards the space between the objects (hoping it would take both).
Somehow the output window gave me this.
It seems like I was able to track both targets, but I want to know if I tracked two different objects at the same time.
I have this doubt because at some point, even though the HoloLens was in the same position as before, the output started to change and began printing only one of the two objects (the one on the right).
I think of this as a problem caused by the HoloLens' small camera window or by its limited hardware.
In the VuforiaConfiguration you should be able to set the maximum number of objects your app can track simultaneously. You need to make sure it is set to more than 1.
In the VuforiaConfiguration inspector in Unity you can set the maximum number of tracked images.
If you are not using Unity you'll have to access the VuforiaConfiguration in another way and set the maximum number of simultaneously tracked objects there.
From code you can do it in C# like this:
VuforiaConfiguration.Instance.Vuforia.MaxSimultaneousImageTargets = 2;
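A slightly fuller sketch, assuming the Vuforia Unity SDK; the MaxSimultaneousObjectTargets property is my assumption by analogy with the image-target one and may differ by SDK version:

    using UnityEngine;
    using Vuforia;

    public class TrackingConfig : MonoBehaviour
    {
        void Awake()
        {
            // Set the limits before Vuforia initializes.
            VuforiaConfiguration.Instance.Vuforia.MaxSimultaneousImageTargets = 2;

            // Object targets have their own limit (assumption: the property
            // name mirrors the image-target one in your SDK version).
            VuforiaConfiguration.Instance.Vuforia.MaxSimultaneousObjectTargets = 2;
        }
    }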

Accuracy of GpsLocation.Heading on Windows Mobile

I've been using GpsLocation.Heading to get the current direction/orientation of my device.
But it seems inaccurate: the value either
keeps changing by a huge amount even when I just put the device on the table without moving it,
or doesn't change at all even when I move around and rotate the device.
In other words, it is very inconsistent and inaccurate.
From the documentation, I believe Heading is supposed to give me the value (in degrees) of where my device is currently heading.
Does anybody know how to use it properly, or how to get the accurate value?
The heading will only be accurate when you move: a GPS receiver derives heading from successive position fixes (course over ground), so a stationary device reports little more than position noise.
Look for a magnetic field sensor (compass), if the device has one, and use that instead.
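To illustrate what "course over ground" means, here is a hedged sketch of the standard forward-azimuth (initial bearing) between two consecutive fixes, in C# for consistency with the other snippets here; names are illustrative:

    using System;

    static class Course
    {
        // Initial bearing in degrees from true north, from fix (lat1, lon1)
        // to fix (lat2, lon2); standard great-circle forward-azimuth formula.
        public static double BearingDegrees(double lat1, double lon1,
                                            double lat2, double lon2)
        {
            double p1 = lat1 * Math.PI / 180.0;
            double p2 = lat2 * Math.PI / 180.0;
            double dLon = (lon2 - lon1) * Math.PI / 180.0;
            double y = Math.Sin(dLon) * Math.Cos(p2);
            double x = Math.Cos(p1) * Math.Sin(p2)
                     - Math.Sin(p1) * Math.Cos(p2) * Math.Cos(dLon);
            return (Math.Atan2(y, x) * 180.0 / Math.PI + 360.0) % 360.0;
        }
    }

With two fixes a few meters apart the bearing is meaningful; with two fixes from a stationary device it is dominated by GPS jitter, which matches the symptoms in the question.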

Indoor positioning

I am trying to get indoor GPS by orienting my floor plan with the actual building from Google Maps. I know perfect accuracy is not possible. Any idea how to do this? Do the maps need to be converted to KML format?
Forget that!
You will only get indoor GPS signals with luck, probably only near a window, and even then the error is likely to be larger than the size of your building.
You can only try to get the coordinates outside, at the corners of the building.
For precise measurements you would need some averaging of the fixes, which only a few GPS devices offer. For less precision, take a single coordinate, or measure it at different hours or on different days.
Otherwise, you should think about geolocation using Wi-Fi/HF and any other wireless/radio sources that you can precisely locate, since you probably installed them yourself, or at least someone from your company/service is responsible for them and could give you the complete list with coordinates. Then, once you have the radio sources located, you can geolocate the devices using radio propagation.
I know that's not the answer you were looking for, but think of it as an alternative if you really need to locate people inside your building.
PS: I did it at work and it works pretty well (except in some areas where radio emitters are broken).
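To make the radio-propagation idea concrete, a minimal hedged sketch: estimate distance from RSSI with a log-distance path-loss model, then trilaterate from three emitters at known positions. Every constant and name here is an illustrative assumption that must be calibrated for a real building:

    using System;

    static class RadioLocate
    {
        // Log-distance path loss: rssi = rssiAt1m - 10 * n * log10(d).
        // n is roughly 2 in free space, higher indoors; calibrate on site.
        public static double DistanceFromRssi(double rssi,
            double rssiAt1m = -40.0, double n = 2.5) =>
            Math.Pow(10.0, (rssiAt1m - rssi) / (10.0 * n));

        // 2-D trilateration from three emitters at known coordinates with
        // estimated distances r1..r3 (solves the linearized circle equations).
        public static (double x, double y) Trilaterate(
            (double x, double y) p1, double r1,
            (double x, double y) p2, double r2,
            (double x, double y) p3, double r3)
        {
            double a1 = 2 * (p2.x - p1.x), b1 = 2 * (p2.y - p1.y);
            double c1 = r1 * r1 - r2 * r2 + p2.x * p2.x - p1.x * p1.x
                                          + p2.y * p2.y - p1.y * p1.y;
            double a2 = 2 * (p3.x - p1.x), b2 = 2 * (p3.y - p1.y);
            double c2 = r1 * r1 - r3 * r3 + p3.x * p3.x - p1.x * p1.x
                                          + p3.y * p3.y - p1.y * p1.y;
            double det = a1 * b2 - a2 * b1; // zero if emitters are collinear
            return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det);
        }
    }

In practice indoor propagation is messy, so fingerprinting (matching observed RSSI vectors against a site survey) often beats pure trilateration; the sketch just shows the geometry.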

Algorithm for reducing GPS track data to discard redundant data?

We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A data set like this, with perhaps a new point recorded every 5 seconds, will be large, and displaying it in a browser or on a handheld device will be challenging. Also, displaying every single point is usually not necessary, since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at 'reducing' data like this, so that the number of points displayed is reduced significantly but without risking misinterpretation of the data. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which is good when looking at a bunch of disconnected points; however, what we need is something that applies to tracks as described above. Thanks.
A more scientific and perhaps more math-heavy solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation from it you can tolerate, it simplifies the path by reducing the number of points.
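A hedged C# sketch of the algorithm, using a distance tolerance in the same units as the coordinates (names are illustrative):

    using System;
    using System.Collections.Generic;

    static class Rdp
    {
        // Ramer-Douglas-Peucker: keep the endpoints, find the point farthest
        // from the chord between them; if it deviates more than `tolerance`,
        // keep it and recurse on both halves, otherwise drop everything between.
        public static List<(double x, double y)> Simplify(
            IReadOnlyList<(double x, double y)> pts, double tolerance)
        {
            if (pts.Count < 3) return new List<(double, double)>(pts);

            int worst = 0; double worstDist = 0;
            for (int i = 1; i < pts.Count - 1; i++)
            {
                double d = PerpDistance(pts[i], pts[0], pts[pts.Count - 1]);
                if (d > worstDist) { worstDist = d; worst = i; }
            }

            if (worstDist <= tolerance)
                return new List<(double, double)> { pts[0], pts[pts.Count - 1] };

            var left = Simplify(new List<(double, double)>(pts).GetRange(0, worst + 1), tolerance);
            var right = Simplify(new List<(double, double)>(pts).GetRange(worst, pts.Count - worst), tolerance);
            left.RemoveAt(left.Count - 1); // avoid duplicating the split point
            left.AddRange(right);
            return left;
        }

        // Perpendicular distance from p to the line through a and b.
        static double PerpDistance((double x, double y) p,
                                   (double x, double y) a, (double x, double y) b)
        {
            double dx = b.x - a.x, dy = b.y - a.y;
            double len = Math.Sqrt(dx * dx + dy * dy);
            if (len == 0)
                return Math.Sqrt((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y));
            return Math.Abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
        }
    }

Because the points recorded during a stop deviate very little from the chord spanning them, RDP tends to collapse the cluster around the 7-Eleven to almost nothing.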
Typically the best way of doing that is:
1. Determine the minimum number of screen pixels you want between displayed GPS points.
2. Determine the distance represented by each pixel at the current zoom level.
3. Multiply answer 1 by answer 2 to get the minimum distance you want between displayed coordinates.
4. Starting from the first coordinate in the journey path, read each subsequent coordinate until you've reached the required minimum distance from the current point, keep that coordinate, and repeat.
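A minimal sketch of step 4 in C#, assuming the track has already been projected to planar units so that straight-line distance is meaningful (names and the projection step are assumptions):

    using System;
    using System.Collections.Generic;

    static class TrackThinning
    {
        // Keep a point only once it is at least `minDistance` away from the
        // last point we kept; minDistance = pixelGap * unitsPerPixel (step 3).
        public static List<(double x, double y)> Thin(
            IReadOnlyList<(double x, double y)> track, double minDistance)
        {
            var result = new List<(double x, double y)>();
            if (track.Count == 0) return result;

            var last = track[0];
            result.Add(last);
            foreach (var p in track)
            {
                double dx = p.x - last.x, dy = p.y - last.y;
                if (Math.Sqrt(dx * dx + dy * dy) >= minDistance)
                {
                    result.Add(p);
                    last = p;
                }
            }
            // Always keep the final point so the track ends where it ended.
            if (!result[result.Count - 1].Equals(track[track.Count - 1]))
                result.Add(track[track.Count - 1]);
            return result;
        }
    }

Note this also collapses the cluster around a stop to a single point, since the rider never moves minDistance away from the last kept point, and it is cheap to recompute whenever the zoom level (and therefore minDistance) changes.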