I am building a universal timelapse app that captures a sequence of photos at a specified interval. In my timer tick event, I capture an image and save it to a storage file like this:
StorageFile file = await appFolder.CreateFileAsync(IMAGE_FILE_ROOT, Windows.Storage.CreationCollisionOption.GenerateUniqueName);
ImageEncodingProperties imageProperties = ImageEncodingProperties.CreateJpeg();
await MediaCaptureManager.CapturePhotoToStorageFileAsync(imageProperties, file);
ImageFilePaths.Add(file.Path);
file = null;
After successfully capturing around 30 images at the highest resolution (around 140 at low resolution) on my phone, I get an OutOfMemoryException from the CapturePhotoToStorageFileAsync method.
I tried capturing the photo to an InMemoryRandomAccessStream instead, so that I could rule out the StorageFile API as the source of the leak, and it still leaks.
I profiled the memory utilization with the Windows Phone Power Tools, and memory rises steadily while photos are being taken.
Is there anything I can do to work around this?
Update:
Here is test code that demonstrates the leak:
for (int x = 0; x < 40; x++)
{
    using (IRandomAccessStream memoryStream = new InMemoryRandomAccessStream())
    {
        await MediaCaptureManager.CapturePhotoToStreamAsync(imageProperties, memoryStream);
    }
    await Task.Delay(1000);
}
So I figured out how to work around the issue!
Apparently the memory leak has something to do with the audio driver. If you initialize the MediaCaptureManager like this, the leak disappears.
var mediaSettings = new MediaCaptureInitializationSettings
{
    PhotoCaptureSource = PhotoCaptureSource.Auto,
    StreamingCaptureMode = StreamingCaptureMode.Video,
    AudioDeviceId = string.Empty
};
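The settings then go into the normal initialization call, roughly like this (a sketch; it assumes MediaCaptureManager is the Windows.Media.Capture.MediaCapture instance used in the snippets above):
// Sketch: initialize the capture device without attaching an audio device.
// MediaCaptureManager is assumed to be a MediaCapture field/property.
private async Task InitializeCaptureAsync(MediaCaptureInitializationSettings mediaSettings)
{
    MediaCaptureManager = new MediaCapture();
    await MediaCaptureManager.InitializeAsync(mediaSettings);
}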
Related
Environment
OS X 10.10
Xcode 6.4
C++/Obj-C OS X application
Use-case
Capture video using CoreMediaIO; the capture source is an iPod 5
The capturing machine runs OS X Yosemite
The capture feed consists of video and audio samples
Problem description
Video capture works fine, but there is an accumulating memory leak whenever video samples are being received; when no video samples are received (audio only), there is no leak (memory consumption stops growing).
I am mixing Cocoa and POSIX threads; I have made sure [NSThread isMultiThreaded] is YES (by creating an empty NSThread).
"Obj-C Auto Ref Counting" is set to YES in the project properties.
The following is a short snippet of the code causing the leak:
OSStatus status = 0;
@autoreleasepool {
    m_udid = udid;
    if (0 != (status = Utils::CoreMediaIO::FindDeviceByUDID(m_udid, m_devId)))
        return HRESULT_FROM_WIN32(ERROR_NOT_FOUND);
    if (0 != (status = Utils::CoreMediaIO::GetStreamByIndex(m_devId, kCMIODevicePropertyScopeInput, 0, m_strmID)))
        return HRESULT_FROM_WIN32(ERROR_NOT_FOUND);
    status = Utils::CoreMediaIO::SetPtopertyData(m_devId, kCMIODevicePropertyExcludeNonDALAccess, 1U);
    // Exclusive access for the calling process
    status = Utils::CoreMediaIO::SetPtopertyData<int>(m_devId, kCMIODevicePropertyDeviceMaster, getpid());
    // Results in an infinitely accumulating memory leak
    status = CMIOStreamCopyBufferQueue(m_strmID, [](CMIOStreamID streamID, void* token, void* refCon) {
        @autoreleasepool {
            // Drain the queue, releasing each sample buffer as it is dequeued
            CMSampleBufferRef sampleBuffer;
            while (0 != (sampleBuffer = (CMSampleBufferRef)CMSimpleQueueDequeue(m_queueRef))) {
                CFRelease(sampleBuffer);
                sampleBuffer = 0;
            }
        }
    }, this, &m_queueRef);
    if (noErr != status)
        return E_FAIL;
    if (noErr != (status = CMIODeviceStartStream(m_devId, m_strmID)))
        return E_FAIL;
}
Having the sample de-queuing done on the main thread (using dispatch_async(dispatch_get_main_queue(), ^{ ... })) didn't have any effect...
Is there anything wrong with the above snippet? Might this be an OS bug?
Reference link: https://forums.developer.apple.com/message/46752#46752
UPDATE
QuickTime Player supports using an iOS device as a capture source (mirroring its A/V to the Mac). Having a preview session run for a while reproduces the above-mentioned problem with the OS-provided QuickTime Player, which strongly indicates an OS bug: the QT Player was consuming 140 MB of RAM after running for ~2 hours (it starts at around 20 MB), and by the end of the day it had grown to ~760 MB...
Apple, please have this fixed; I have standing customer commitments...
In my project I have recorded sound using MediaPlayer and saved it as a .3gp file, but now I want to play it back with effects: fast-forwarding or changing the pitch of the audio while playing. I tried MediaPlayer, but it doesn't support this. Then I tried AudioTrack, but AudioTrack only takes a byte stream as input. I just want to play the .3gp file and change the pitch during playback. My current code is below.
Help me... thanks in advance.
public void play() {
    File path = new File(
            Environment.getExternalStorageDirectory().getAbsolutePath()
                    + "/sdcard/meditest/");
    File[] f = path.listFiles();
    isPlaying = true;
    int bufferSize = AudioTrack.getMinBufferSize(outfrequency,
            channelConfigurationout, audioEncoding);
    short[] audiodata = new short[bufferSize];
    try {
        DataInputStream dis = new DataInputStream(
                new BufferedInputStream(new FileInputStream(f[0])));
        audioTrack = new AudioTrack(
                AudioManager.STREAM_MUSIC, outfrequency,
                channelConfigurationout, audioEncoding, bufferSize,
                AudioTrack.MODE_STREAM);
        // Raise the playback rate to shift the pitch up.
        audioTrack.setPlaybackRate((int) (frequency * 1.5));
        AudioManager audioManager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
        // Set the volume of played media to maximum.
        audioTrack.setStereoVolume(1.0f, 1.0f);
        Log.d("Clapper", "player start");
        audioTrack.play();
        while (isPlaying && dis.available() > 0) {
            int i = 0;
            while (dis.available() > 0 && i < audiodata.length) {
                audiodata[i] = dis.readShort();
                i++;
                if (i % 50 == 0)
                    Log.d("Clapper", "playing now " + i);
            }
            audioTrack.write(audiodata, 0, audiodata.length);
        }
        Log.d("Clapper", "AUDIO LENGTH: " + String.valueOf(audiodata));
        dis.close();
        audioTrack.stop();
    } catch (Throwable t) {
        Log.e("AudioTrack", "Playback Failed");
    }
    Log.d("Clapper", "AUDIO state: " + String.valueOf(audioTrack.getPlayState()));
    talkAnimation.stop();
    if (audioTrack.getPlayState() != AudioTrack.PLAYSTATE_PLAYING) {
        runOnUiThread(new Runnable() {
            public void run() {
                imgtalk.setBackgroundResource(R.drawable.talk1);
            }
        });
    }
}
I tried a library called Sonic. It's basically for speech, as it uses the PSOLA algorithm to change pitch and tempo.
Sonic Library
I got your problem. MediaPlayer does not support changing the pitch during playback.
Consider using a SoundPool
http://developer.android.com/reference/android/media/SoundPool.html
It supports changing the pitch in real time while playing:
The playback rate can also be changed. A playback rate of 1.0 causes the sound to play at its original frequency (resampled, if necessary, to the hardware output frequency). A playback rate of 2.0 causes the sound to play at twice its original frequency, and a playback rate of 0.5 causes it to play at half its original frequency. The playback rate range is 0.5 to 2.0.
Once the sounds are loaded and play has started, the application can trigger sounds by calling SoundPool.play(). Playing streams can be paused or resumed, and the application can also alter the pitch by adjusting the playback rate in real-time for doppler or synthesis effects.
http://developer.android.com/reference/android/media/SoundPool.html#setRate(int, float)
If you want to change the pitch while playing a sound, you have to use SoundPool; this is the best way to do it. You can also fast-forward your playback by raising the rate, and you will hear that the pitch changes with it. A sketch of the calls involved follows below.
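To make the calls concrete, here is a minimal sketch. It is written in C# against the Xamarin.Android binding of the same android.media.SoundPool API (the Java method names and signatures are identical), and the file path is a placeholder:
using Android.Media;

// Sketch: load a clip into a SoundPool and bend its pitch at runtime.
SoundPool pool = new SoundPool(1, Stream.Music, 0);

// Load returns a sound id; loading is asynchronous, so in real code wait
// for the load-complete callback before playing. The path is a placeholder.
int soundId = pool.Load("/sdcard/meditest/recording.3gp", 1);

// rate 1.0f = original pitch; the allowed range is 0.5f to 2.0f.
int streamId = pool.Play(soundId, 1.0f, 1.0f, 1, 0, 1.0f);

// Later, while the stream is playing, shift the pitch up in real time.
pool.SetRate(streamId, 1.5f);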
I am currently developing an app on iOS that reads IMA-ADPCM audio data in over a TCP socket, converts it to PCM, and then plays the stream. At this stage, I have completed the class that pulls in (or should I say reacts to pushes of) the data from the stream and decodes it to PCM. I have also set up the Audio Queue class and have it playing a test tone. Where I need assistance is the best way to pass the data into the Audio Queue.
The audio data comes out of the ADPCM decoder as 8 kHz 16-bit LPCM in 640-byte chunks (it originates as 160 bytes of ADPCM data but decompresses to 640). It comes into the function as a uint8_t array and passes out an NSData object. The stream is a 'push' stream, so every time audio is sent it will create/flush the object.
-(NSData*)convertADPCM:(uint8_t[]) adpcmdata {
The Audio Queue callback, of course, is a pull function that goes looking for data on each pass of the run loop; on each pass it runs:
-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {
I've been working on this for a few days; the PCM conversion was quite taxing, and I am having a little trouble assembling in my head the best way to bridge the data between the two. It's not as if I am creating the data, in which case I could simply incorporate data creation into the fillBuffer routine; rather, the data is being pushed at me.
I did set up a circular buffer of 0.5 seconds in a uint16_t[], but I think I have worn my brain out and couldn't work out a neat way to push to and pull from the buffer, so I ended up with snap, crackle, and pop.
I have completed the project mostly on Android, but found AudioTrack a very different beast from Core Audio queues.
At this stage I will also say I picked up a copy of Learning Core Audio by Adamson and Avila and found this an excellent resource for anyone looking to demystify core audio.
UPDATE:
Here is the buffer management code:
-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {
    int frame = 0;
    double frameCount = bufferSize / self.streamFormat.mBytesPerFrame;
    // frameCount = bufferSize / bytesPerFrame = 8000 / 2 = 4000
    //
    // The incoming buffer uint16_t[] convAudio holds 64400 bytes
    // (big, I know - 100 x 644 bytes).
    // playHead is set by this function to say where in the buffer the
    // next starting point should be.
    if (playHead > 99) {
        playHead = 0;
    }
    // playStep scales playHead to get the starting position
    int playStep = playHead * 644;
    // filling the buffer
    for (frame = 0; frame < frameCount; ++frame)  // frameCount = 4000
    {
        // pointer to the queue buffer's audio data
        SInt16 *data = (SInt16*)buffer->mAudioData;
        // load data from the uint16_t[] convAudio array into the frame
        data[frame] = convAudio[frame + playStep];
    }
    // set the buffer size
    buffer->mAudioDataByteSize = bufferSize;
    // return no error - the OSStatus would report an error otherwise (I think)
    return noErr;
}
As I said, my brain was fuzzy when I wrote this, and there's probably something glaringly obvious I am missing.
The above code is called by the callback:
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
    soundHandler *sHandler = (__bridge soundHandler*)inUserData;
    CheckError([sHandler fillBuffer: inCompleteAQBuffer],
               "can't refill buffer",
               "buffer refilled");
    CheckError(AudioQueueEnqueueBuffer(inAQ,
                                       inCompleteAQBuffer,
                                       0,
                                       NULL),
               "Couldn't enqueue buffer (refill)",
               "buffer enqueued (refill)");
}
On the convAudio array side of things, I have dumped it to the log and it is getting filled and refilled in a circular fashion, so I know at least that bit is working.
The hard part is managing rates, and deciding what to do when they don't match. At first, try using a huge circular buffer (many, many seconds) and mostly fill it before starting the Audio Queue pulling from it. Then monitor the buffer level to see how big a rate-matching problem you have.
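To make the push/pull handoff concrete, here is a minimal ring-buffer sketch. The idea is language-agnostic, so it is shown in C#, and all the names (RingBuffer, FillLevel, and so on) are mine rather than anything from the poster's code; the network thread writes decoded chunks, the audio callback reads fixed-size blocks, and the fill level tells you how badly the two rates are drifting:
// A minimal single-producer/single-consumer ring buffer sketch.
// All names here are hypothetical, not from the poster's code.
class RingBuffer
{
    private readonly short[] _data;
    private readonly object _lock = new object();
    private long _writePos;   // total samples ever written
    private long _readPos;    // total samples ever read

    public RingBuffer(int capacitySamples)
    {
        _data = new short[capacitySamples];
    }

    // Samples currently buffered; monitor this to spot rate drift.
    public long FillLevel
    {
        get { lock (_lock) { return _writePos - _readPos; } }
    }

    // Push side (network thread): append one decoded chunk.
    public void Write(short[] chunk)
    {
        lock (_lock)
        {
            foreach (short sample in chunk)
                _data[_writePos++ % _data.Length] = sample;
        }
    }

    // Pull side (audio callback): fill dest, padding with silence on underrun.
    public void Read(short[] dest)
    {
        lock (_lock)
        {
            for (int i = 0; i < dest.Length; i++)
                dest[i] = (_readPos < _writePos)
                        ? _data[_readPos++ % _data.Length]
                        : (short)0;
        }
    }
}
If FillLevel climbs without bound, the sender is faster than playback and you will eventually have to drop samples; if it keeps hitting zero, playback is faster and the silence padding is what produces the crackle.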
In my WP7 application I have downloaded 200 images from the web and saved them in isolated storage. When debugging, all the images are loaded into a panorama view by a queue method, and I can view them while the phone is connected to the PC. After disconnecting it from the PC, when I open the application and navigate the images, it shows some images and then terminates.
if (i < 150)
{
    WebClient m_webClient = new WebClient();
    Uri m_uri = new Uri("http://d1mu9ule1cy7bp.cloudfront.net/2012//pages/p_" + i + "/mobile_high.jpg");
    m_webClient.OpenReadCompleted += new OpenReadCompletedEventHandler(webClient_OpenReadCompleted);
    m_webClient.OpenReadAsync(m_uri);
}
void webClient_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    int count;
    try
    {
        Stream stream = e.Result;
        byte[] buffer = new byte[1024];
        using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
        {
            //isf.Remove();
            using (System.IO.IsolatedStorage.IsolatedStorageFileStream isfs = new IsolatedStorageFileStream("IMAGES" + loop2(k) + ".jpg", FileMode.Create, isf))
            {
                count = 0;
                while (0 < (count = stream.Read(buffer, 0, buffer.Length)))
                {
                    isfs.Write(buffer, 0, count);
                }
                stream.Close();
                isfs.Close();
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}
I think your problem is that if you load too many images at once in a loop, the garbage collection of the bitmap images only happens once you leave the loop and hand control back to the UI thread.
This article explains it a bit better and provides a solution.
I also had this problem and came up with my own solution. I had a dictionary of image URLs that needed to be loaded, but you can easily adapt this to your scenario.
This SO question is also about this problem (loading multiple images and crashing with an Exception). It also has Microsoft's response; I based my solution on that response.
In my solution I use the dispatcher to return to the UI thread between images, thus making sure the garbage from the images and bitmaps is cleaned up:
private void LoadImages(List<string> sources)
{
    List<string>.Enumerator iterator = sources.GetEnumerator();
    this.Dispatcher.BeginInvoke(() => { LoadImage(iterator); });
}

private void LoadImage(List<string>.Enumerator iterator)
{
    if (iterator.MoveNext())
    {
        //TODO: Load the image from iterator.Current
        //Now load the next image
        this.Dispatcher.BeginInvoke(() => { LoadImage(iterator); });
    }
    else
    {
        //Done loading images
    }
}
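To tie it together, here is roughly what the TODO could look like for this scenario: loading a bitmap from isolated storage on the UI thread. This is a sketch; LoadSingleImage and loadedImages are hypothetical names, and the IMAGESn.jpg naming comes from the question's code:
// Requires System.IO.IsolatedStorage and System.Windows.Media.Imaging.
// Hypothetical fill-in for the TODO above.
private void LoadSingleImage(string fileName)
{
    using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
    using (IsolatedStorageFileStream stream = isf.OpenFile(fileName, FileMode.Open, FileAccess.Read))
    {
        BitmapImage bitmap = new BitmapImage();
        bitmap.SetSource(stream);     // decoding happens here, on the UI thread
        loadedImages.Add(bitmap);     // loadedImages: an assumed collection bound to the UI
    }
}
The caller would then be something like LoadImages(new List<string> { "IMAGES1.jpg", "IMAGES2.jpg" }), letting each dispatcher pass decode one image before the next is scheduled.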
After talking on Skype I reviewed his code and found out his problem was with the Isolated Storage Explorer. It couldn't connect to his PC, so it gave an error. It had nothing to do with the image loading.
I'd be very wary of the memory implications of loading 200 images at once. Have you been profiling the memory usage? Using too much memory could cause your application to be terminated.
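As a quick check without a full profiler, the Microsoft.Phone.Info.DeviceStatus class (available from Windows Phone 7.1) exposes the app's memory numbers; a sketch of logging them periodically:
// Requires: using Microsoft.Phone.Info; (WP 7.1 and later)
long current = DeviceStatus.ApplicationCurrentMemoryUsage;
long peak = DeviceStatus.ApplicationPeakMemoryUsage;
long limit = DeviceStatus.ApplicationMemoryUsageLimit;
System.Diagnostics.Debug.WriteLine(
    "Memory: {0:N0} current / {1:N0} peak / {2:N0} limit bytes",
    current, peak, limit);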
I'm trying to build an application using AIR 2's new NativeProcess APIs, going from Brent's little video:
http://tv.adobe.com/watch/adc-presents/preview-command-line-integration-in-adobe-air-2
but I'm having some issues; namely, I get an error every time I try to start my process.
I am running OS X 10.5.8 and I want to run diskutil and get a list of all mounted volumes.
Here is the code I am trying:
private function unmountVolume():void
{
    if (!this.deviceMounted) { return; }

    // OS X
    if (Capabilities.os.indexOf("Mac") == 0)
    {
        diskutil = new NativeProcess();
        // TODO: should really add event listeners in case of error
        diskutil.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onDiskutilOut);

        var startupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
        startupInfo.executable = new File('/usr/sbin/diskutil');

        var args:Vector.<String> = new Vector.<String>();
        args.push("list");
        //args.push(this.currentVolumeNativePath);
        startupInfo.arguments = args;

        diskutil.start(startupInfo);
    }
}
which seems pretty straightforward and is based on his grep example.
Any ideas about what I'm doing wrong?
The issue was that the following line was not added to my descriptor:
<supportedProfiles>extendedDesktop</supportedProfiles>
That should really be better documented :) It wasn't mentioned in the video.
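For anyone else hitting this: the tag is a direct child of the root <application> element in the -app.xml descriptor, roughly like this (a sketch; the id and filename are placeholders, and the namespace version should match your AIR SDK):
<?xml version="1.0" encoding="utf-8"?>
<application xmlns="http://ns.adobe.com/air/application/2.0">
    <id>com.example.DiskUtilDemo</id>            <!-- placeholder -->
    <filename>DiskUtilDemo</filename>            <!-- placeholder -->
    <supportedProfiles>extendedDesktop</supportedProfiles>
    <!-- ... rest of the descriptor ... -->
</application>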