Get GPS data from MOV (QuickTime) video file

Please help me get a GPS track with timestamps out of a .mov file.
The file comes from a car camera and contains GPS data, since the camera's own viewer shows the car's position.
What is the right way to do that?

You don't say if you're looking for a programming solution to parse the file and read the GPS metadata yourself, or whether you're looking for a tool that will display the data.
It also depends very much on the specific camera that recorded the file, as different cameras embed the data in different formats. An iPhone, for example, records GPS data in an mdta metadata atom with the key "com.apple.quicktime.location.ISO6709", but other formats exist too, especially if you mean time-varying GPS data embedded in each frame rather than in the header for the movie as a whole.
Tools that will read such data from the movie header include ExifTool and CatDV (though the latter is a commercial product).

I found that ffprobe from the ffmpeg project was able to extract the com.apple.quicktime.location.ISO6709 tag.
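
If you want to pull that tag out programmatically, here is a minimal sketch in Python (assuming ffprobe is on your PATH; "clip.mov" is a placeholder filename) that shells out to ffprobe and parses the ISO 6709 string:

    import json
    import re
    import subprocess

    def read_mov_location(path):
        # Ask ffprobe for the container-level metadata as JSON.
        out = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
            capture_output=True, text=True, check=True,
        ).stdout
        tags = json.loads(out).get("format", {}).get("tags", {})
        iso6709 = tags.get("com.apple.quicktime.location.ISO6709")
        if not iso6709:
            return None  # no header-level location tag in this file
        # ISO 6709 strings look like "+48.8577+002.2950+060.000/"
        # (latitude, longitude, optional altitude).
        m = re.match(r"([+-]\d+(?:\.\d+)?)([+-]\d+(?:\.\d+)?)([+-]\d+(?:\.\d+)?)?", iso6709)
        if not m:
            return None
        return tuple(float(g) for g in m.groups() if g is not None)

    print(read_mov_location("clip.mov"))

Note this only reads the single whole-movie location from the header. For a per-frame track from a dashcam, ExifTool's -ee ("extract embedded") option is the usual starting point.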

Related

"Live" data capable alternative for Google Earth KML

I'm currently using Google Earth + KML files to visualize aircraft flight paths in 3D. It works perfectly and looks fine, but the big disadvantage is that there seems to be no way to feed "live" data to Google Earth and draw the flight paths in real time.
Is there an alternative that is capable of displaying live data without manually reloading a file or anything like that? A satellite-picture surface would be an absolute MUST.
Maybe someone out there knows a proper solution for my project.
Thanks
The KML NetworkLink tag provides several ways to automatically update/reload a KML file, which will let you provide "live" data. You can either make the NetworkLink update the KML every time the user stops moving the map (with a settable delay), or on a timer (e.g. every 10 seconds). Look at the KML Reference and the developer tutorials for more info.
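
For the timer variant, the wrapper KML is tiny; here is a sketch that writes one out with plain Python (http://example.com/live.kml is a placeholder for wherever your server publishes the regenerated flight-path KML):

    # Write a NetworkLink KML that Google Earth reloads every 10 seconds.
    NETWORK_LINK = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <NetworkLink>
        <name>Live flight paths</name>
        <Link>
          <href>http://example.com/live.kml</href>
          <refreshMode>onInterval</refreshMode>
          <refreshInterval>10</refreshInterval>
        </Link>
      </NetworkLink>
    </kml>
    """

    with open("live_link.kml", "w") as f:
        f.write(NETWORK_LINK)

You open live_link.kml in Google Earth once, and it then refetches live.kml on the interval. For view-driven reloads instead, use <viewRefreshMode>onStop</viewRefreshMode> together with a <viewRefreshTime>.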

Extracting FLV video from a RAM dump

I want to extract some FLV video from a RAM dump. Is there an easy way to do that using some good tool you know about?
I've Googled but found nothing!
I have an idea, but it seems difficult and time-consuming: search for the FLV magic number and extract the data incrementally from there. I don't know whether this method actually works, though.
The Macintosh FileJuicer program does what you describe (it searches binary blobs for magic numbers and extracts what it finds); it would search the RAM dump for all the magic numbers it knows about. I'm not sure whether there's an equivalent for other OSes, perhaps one that's open source, but at least it's one more Google search term.
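
Your magic-number idea does work in practice, by the way; carving FLV out of a dump is a few lines in any language. A rough sketch in Python ("memory.dmp" is a placeholder; files that were fragmented in memory won't come out clean):

    # Scan a RAM dump for the FLV signature ("FLV" + version byte 0x01)
    # and dump everything from each hit to the next hit (or end of file).
    MAGIC = b"FLV\x01"

    def carve_flv(dump_path):
        data = open(dump_path, "rb").read()
        hits = []
        pos = data.find(MAGIC)
        while pos != -1:
            hits.append(pos)
            pos = data.find(MAGIC, pos + 1)
        for i, start in enumerate(hits):
            end = hits[i + 1] if i + 1 < len(hits) else len(data)
            with open(f"carved_{i}.flv", "wb") as out:
                out.write(data[start:end])
        return len(hits)

    print(carve_flv("memory.dmp"), "candidate files written")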

Write KML Extended Data in a different way

I have some GPS raw data that I want to put on a KML file.
Currently I can generate the KML file with the Extended Data using the KML format described here: https://developers.google.com/kml/documentation/kmlreference#trackexample. That works, but it takes too much time.
I am collecting six different types of extended data using an Arduino and writing them to an SD card, but the whole writing process for each sample is too slow (I write the data to six different files and then append each file to the final KML, using the gx:Track element).
Is there any other way to write all six parameters at the same time in the KML format using the Extended Data? Maybe using different tags, or the same tags in a different order?
I don't have enough CPU power to rework the file after collecting the raw GPS data, so I need to write it right the first time.
Write the KML entirely yourself; do not use a library. Then it is as fast as simply writing text to a file. If the bottleneck is the file system, then KML is not the right format: use a custom binary file and transform it to KML later on the server side.
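
To make the second suggestion concrete, here is a sketch of the server-side transform in Python, assuming a hypothetical fixed record layout (little-endian uint32 timestamp, lon/lat floats, six float channels; the channel names are placeholders). The Arduino just appends 36-byte records; the server builds the column-oriented gx:Track layout (all <when> elements, then all <gx:coord> elements, then one gx:SimpleArrayData per channel) in memory, which the Arduino can't afford but a server easily can:

    import struct

    FIELDS = ["ch1", "ch2", "ch3", "ch4", "ch5", "ch6"]  # placeholder names
    REC = struct.Struct("<Iff6f")  # 36 bytes: time, lon, lat, six channels

    def binlog_to_kml(log_path, kml_path):
        whens, coords = [], []
        channels = {name: [] for name in FIELDS}
        with open(log_path, "rb") as f:
            while True:
                rec = f.read(REC.size)
                if len(rec) < REC.size:
                    break
                t, lon, lat, *vals = REC.unpack(rec)
                whens.append(f"<when>{t}</when>")  # real code: ISO 8601 stamps
                coords.append(f"<gx:coord>{lon} {lat} 0</gx:coord>")
                for name, v in zip(FIELDS, vals):
                    channels[name].append(f"<gx:value>{v}</gx:value>")
        schema = "".join(
            f'<gx:SimpleArrayField name="{n}" type="float"/>' for n in FIELDS)
        arrays = [
            f'<gx:SimpleArrayData name="{n}">{"".join(vs)}</gx:SimpleArrayData>'
            for n, vs in channels.items()]
        parts = [
            '<?xml version="1.0" encoding="UTF-8"?>',
            '<kml xmlns="http://www.opengis.net/kml/2.2"'
            ' xmlns:gx="http://www.google.com/kml/ext/2.2">',
            f'<Document><Schema id="sensors">{schema}</Schema>',
            '<Placemark><gx:Track>',
            *whens,
            *coords,
            '<ExtendedData><SchemaData schemaUrl="#sensors">',
            *arrays,
            '</SchemaData></ExtendedData>',
            '</gx:Track></Placemark></Document></kml>']
        with open(kml_path, "w") as out:
            out.write("\n".join(parts))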

NAudio - Create software beat machine / sampler - General Strategy

I'm new to NAudio, but the goal of my project is to give the user the ability to listen to an MP3 and then select a "chunk" of that song as a sample, which could be saved to disk. These samples would then be replayable at the same time (i.e. not merged, but played simultaneously).
Could someone please let me know the overall strategy required to achieve this (not necessarily the specifics; almost like pseudocode)?
For example, would the samples/chunks of a song need to be saved as WAV files, and could these samples then be played together in the WAV format, etc.?
I have seen a few small examples of implementations of some of the ideas I've mentioned above, but I don't have a good sense of the big picture just yet.
Thanks in advance,
Andrew
The chunks wouldn't need to be saved as WAV files unless you were keeping them for future use. You can store the PCM audio (Mp3FileReader automatically converts to PCM) in a byte array and use RawSourceWaveStream to play them.
As for mixing them, I'd recommend using the MixingSampleProvider. This does mean you need to convert your RawSourceWaveStream to IEEE float, but you can use Pcm16BitToSampleProvider to do this. This will give the advantage that you can adjust volumes (and do other DSP) easily on the samples you are mixing. MixingSampleProvider also auto-removes completed inputs, so you can just add new inputs whenever you want to trigger a sound.
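
NAudio is C#, but the flow is easy to see in a language-neutral sketch; here it is in Python with numpy ("song.wav" is a placeholder, assumed to be 16-bit PCM, which is what Mp3FileReader would hand you): decode once, keep the chunks as in-memory arrays rather than files on disk, mix in float the way MixingSampleProvider does, then clip back to 16-bit:

    import wave
    import numpy as np

    def load_pcm(path):
        # Read the whole file as interleaved 16-bit PCM samples.
        with wave.open(path, "rb") as w:
            frames = w.readframes(w.getnframes())
            params = w.getparams()
        return np.frombuffer(frames, dtype=np.int16), params

    pcm, params = load_pcm("song.wav")
    sps = params.framerate * params.nchannels  # int16 samples per second

    # Two "samples"/chunks of the song, kept purely in memory.
    chunk_a = pcm[0 * sps : 5 * sps]     # seconds 0-5
    chunk_b = pcm[30 * sps : 35 * sps]   # seconds 30-35

    # Mix in float (the equivalent of MixingSampleProvider's IEEE-float
    # pipeline), then clip and convert back to 16-bit for output.
    n = max(len(chunk_a), len(chunk_b))
    mix = np.zeros(n, dtype=np.float32)
    for chunk in (chunk_a, chunk_b):
        mix[: len(chunk)] += chunk.astype(np.float32) / 32768.0
    out = (np.clip(mix, -1.0, 1.0) * 32767).astype(np.int16)

    with wave.open("mixed.wav", "wb") as w:
        w.setparams(params)
        w.writeframes(out.tobytes())

In NAudio itself the final conversion back to 16-bit isn't needed for playback: you keep everything as ISampleProviders and just call AddMixerInput whenever a sample is triggered.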

Capture and play new media types with AVFoundation

I've got a very non-standard AVFoundation question and as a relative newbie to the iOS world I could really use some guidance from the experts out there -
I'm working on an app that lets the user record bits of audio which I need to programmatically arrange using AVMutableComposition. Here's the thing, in addition to the audio track I want to capture and save accelerometer data and have it synced with the sound. Typically AVFoundation is used for known media types like still photos, audio, and video but I was wondering whether it's feasible to capture something like accelerometer data using this framework. This would make it much easier for me to sync the sensor data with the captured audio, especially when putting the parts together with AVMutableComposition.
Here is what I need to accomplish:
1. Record accelerometer data as an AVAsset/AVAssetTrack so I can insert it into an AVMutableComposition
2. Allow for playback of the accelerometer data in a custom view alongside the audio it was recorded with
3. Save the AVMutableComposition to disk, including both the audio and accelerometer tracks. It would be nice to use a standard container like QuickTime
For parts 1 & 3 I'm looking at using the AVAssetReader, AVAssetReaderOutput, AVAssetWriter, and AVAssetWriterInput classes to capture from the accelerometer, but without much experience with Cocoa I'm trying to figure out exactly what I need to extend. At this point I'm thinking I need to subclass AVAssetReaderOutput and AVAssetWriterInput and work with CMSampleBuffers to allow the conversion between the raw accelerometer data and an AVAsset. I've observed that most of these classes only have a single private member referencing a concrete implementation (i.e. AVAssetReaderInternal or AVAssetWriterInputInternal). Does anyone know whether this is a common pattern, or what it means for writing a custom implementation?
I haven't yet given part 2 much thought. I'm currently using an AVPlayer to play the audio but I'm not sure how to have it dispense sample data from the asset to my custom accelerometer view.
Apologies for such an open-ended question; I suppose I'm looking more for guidance than a specific solution. Any gut feelings as to whether this is at all possible with AVFoundation's architecture?
I would use an NSMutableArray and store the accelerometer data plus the time code there. Then on playback you can get the current time from the player and use it to look up the accelerometer data in the array. Since the data is stored as a timeline, you don't have to search the whole array; it is enough to step forward in the array and check when the time values coincide.
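
A sketch of that lookup (Python standing in for the Objective-C; the cursor would need resetting if the user can seek backwards):

    # Keep (time, sample) pairs in capture order, remember where you left
    # off, and step forward as the player clock advances instead of
    # searching the whole list on every display refresh.
    class AccelTimeline:
        def __init__(self):
            self.samples = []   # list of (timestamp_seconds, (x, y, z))
            self._cursor = 0

        def record(self, t, xyz):
            self.samples.append((t, xyz))  # appended in capture order, so sorted

        def sample_at(self, playback_time):
            # Advance the cursor until the next sample is past the clock.
            while (self._cursor + 1 < len(self.samples)
                   and self.samples[self._cursor + 1][0] <= playback_time):
                self._cursor += 1
            return self.samples[self._cursor][1] if self.samples else None

    # Usage: on each refresh of the custom view, ask the audio player for
    # its current time and fetch the matching reading.
    timeline = AccelTimeline()
    timeline.record(0.00, (0.0, 0.0, 1.0))
    timeline.record(0.02, (0.1, 0.0, 1.0))
    print(timeline.sample_at(0.015))   # -> (0.0, 0.0, 1.0)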