Memory Leak on UWP MediaPlayer (Windows.Media.Playback.MediaPlayer) - xaml

I am maintaining a WPF application. I added a UWP media player to my project, but memory usage is far too high. I tracked it down to the UWP MediaPlayer, so I wrote a minimal reproduction:
while (true)
{
    var mp = new MediaPlayer()
    {
        Source = MediaSource.CreateFromUri(new Uri("Test.mp4"))
    };
    Thread.Sleep(1000);
    mp.Play();
    Thread.Sleep(1000);
    mp.Dispose();
}
This code leaks memory. I create a MediaPlayer and dispose of it, yet its memory usage grows without bound.
How can I track down the memory leak in this code?
This is a .NET Core 3.0 project (XAML Islands with WPF). I haven't tested yet whether the same thing happens in a pure UWP project.
Someone suggested this is expected because it runs in a loop, but the code below doesn't leak because the GC reclaims the objects (of course, a limited number of references may not be collected):
while (true)
{
    new SomeClass();
}

I am convinced this is a bug in Windows 10 19H1, because the built-in Movies & TV app has the same memory leak. To reproduce it, just repeatedly open a video file and close it.

The way your code is written, memory will bloat and grow until you run out of it; I verified this in pure UWP as well. If you make the following two changes, you will find that memory remains stable and the system reclaims it all after each iteration:
Dispose of the MediaSource object that you create and assign to the Source property as well
Don't run this in a tight loop; instead, invoke yourself as a dispatcher action
Here is the code (tested in UWP) that doesn't show any leak; in WPF the Dispatcher call would look slightly different (a rough WPF sketch follows after this snippet):
private async void PlayMedia()
{
    var ms = MediaSource.CreateFromUri(new Uri("ms-appx:///Media1.mp4"));
    var mp = new MediaPlayer()
    {
        Source = ms
    };
    Thread.Sleep(1000);
    mp.Play();
    Thread.Sleep(1000);
    mp.Dispose();
    ms.Dispose();
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, new DispatchedHandler(PlayMedia));
}
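For reference, a WPF version of the same pattern might look like the sketch below. This is untested and makes a few assumptions: the method lives on the Window (so Dispatcher is the WPF System.Windows.Threading.Dispatcher), the Windows.Media.Playback/Windows.Media.Core types are available through XAML Islands, the file URI is a placeholder, and Thread.Sleep is swapped for await Task.Delay so the UI thread isn't blocked:
// Untested WPF sketch of the same pattern.
// Assumes: using System; using System.Threading.Tasks; using System.Windows.Threading;
//          using Windows.Media.Core; using Windows.Media.Playback;
private async void PlayMedia()
{
    // Placeholder path; substitute your own media file.
    var ms = MediaSource.CreateFromUri(new Uri("file:///C:/Temp/Test.mp4"));
    var mp = new MediaPlayer()
    {
        Source = ms
    };

    await Task.Delay(1000);   // don't block the UI thread
    mp.Play();
    await Task.Delay(1000);

    // Dispose both the player and its MediaSource so the native resources are released.
    mp.Dispose();
    ms.Dispose();

    // Re-schedule the next iteration through the WPF dispatcher instead of looping.
    await Dispatcher.InvokeAsync(PlayMedia, DispatcherPriority.Normal);
}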
As a side note: the "SomeClass" comparison you mentioned isn't exactly an apples-to-apples comparison if SomeClass is a pure managed-code class, as the objects you are creating here are complex native Windows Runtime objects that only have a thin managed-code wrapper around them.
I have now tested in WPF as well: I reproduced the original memory growth, then applied the suggested changes and verified that the memory no longer grows. Here is my test project for reference: https://1drv.ms/u/s!AovTwKUMywTNuLk9p3frvE-U37saSw
Also, I ran your shared solution with the WPF app packaged as a Windows App Package, and I am not seeing a leak on the latest released version of Windows 10 (17763.316). Below is a screenshot of the memory diagnostics after running your solution for quite a while. If this is specific to the Insider build you are running, please log a bug via Feedback Hub. I think at this point we should close this question as answered.

Related

How to debug a native OpenGL crash in managed code?

I am currently writing a game rendering engine using LWJGL 3 and Kotlin. Everything works fine for multiple minutes, until out of nowhere the program exits with the following message:
Process finished with exit code -1073740940 (0xC0000374)
All I do is load a few models, and then render them with glDrawElements(...) in the main loop - nothing else is loaded or changed.
Now, I know that this error code means heap corruption, but I do not even get a hs_err_pid logfile and the Java Debugger just crashes with the program. So how would I go about debugging such a crash? Could this be because of an incompatibility with Kotlin?
So, for everyone who may find themselves in a similar situation: Thanks to the LWJGLX debug tool by Kai Burjack, I instantly found what crashed the program.
What I did was the following: In the shader class, when uploading a matrix, I allocated a managed FloatBuffer, which I then accidentally tried to free manually:
val buf = BufferUtils.createFloatBuffer(16) // heap/GC-managed buffer, not allocated by MemoryUtil
matrix.get(buf)
glUniformMatrix4fv(location, false, buf)
MemoryUtil.memFree(buf) // BUG: frees memory that memAlloc() never allocated, corrupting the heap
The MemoryUtil.memFree() call doesn't actually crash, but because the matrix changes every frame, this method corrupts the heap over time - hence the crash after a few minutes.
I attached the LWJGLX debugger and my program then crashed instantly - with a precise error message telling me that I was trying to free a memory region I did not allocate with memAlloc(). So after changing my code to
val buf = MemoryUtil.memAllocFloat(16)
...
MemoryUtil.memFree(buf)
everything now works. I can only recommend the LWJGLX debugger - it also found some memory leaks I now have to fix ;-)

Same code won't work (kind of) in a shared library, but works when used directly in the program

I created a scripting language. Once it worked perfectly, I moved all the code into a shared library and wrote a wrapper for it, but the same code won't work from the shared library. I've noticed that the code runs faster in the shared library, but it always crashes due to memory problems, saying that an index is out of the array's bounds - yet the very same code runs perfectly outside the library.
I've also noticed that if I reduce the amount of work it has to do, it lasts a bit longer before crashing.
My question here is that what is causing this crash, and how do I stop it from happening?
P.S.: I haven't included all the code, because the whole thing is 1039 lines (I can link to it if you need it to solve the problem), but I have tracked the crash down to one function. The confusing thing is that the function always crashes on the 821st time it's called, never before - that is with the more optimized code; when the code was less optimized and used more CPU, it would crash on the 702nd call.
Plus: I'm using DMD2, the functions are exported using extern(C), and I'm testing all of this on Linux (Ubuntu 14.04). This is how I compile the library:
dmd -debug -gc "qscript.d" "qcompiler.d" "lists.d" "dllmain.d" "-shared" "-odobj/Debug" "-of/home/nafees/Desktop/Projects/QScr/QScr/bin/Debug/libQScr.so" -w -vcolumns
The library is loaded using the dlopen function.
Again, in case you missed it, my question is: what is causing this crash, and how do I stop it from happening? EDIT: also, how can I disable the garbage collector? gc.disable doesn't work; gc is undefined.
EDIT: I have tracked down 'why' the crash occurs. I put debug code all over the files, only to find that the garbage collector is messing with the script file that was loaded into memory. I 'fixed' the problem (not really) by adding a check: if the script is no longer intact, it is reloaded into memory. This avoids the crash, but the underlying problem still exists. That changes the question to:
How can I disable the garbage collector? BTW, I tried gc.disable, but DMD says that gc is undefined.
You must initialize the D runtime when your shared library is loaded for the first time. To do so, you need to add something like this to your library:
private __gshared bool _init = false;
import core.runtime: rt_init, rt_term;

export extern(C) void init()
{
    // Initialize the D runtime exactly once.
    if (!_init) rt_init;
    _init = true;
}
export extern(C) void terminate()
{
    if (_init) rt_term;
    _init = false;
}
I really do mean something like that, and not exactly that. Since we don't know how your scripting engine is used, an init counter might also be valid:
private __gshared uint _init;
import core.runtime: rt_init, rt_term;

export extern(C) void init()
{
    if (!_init) rt_init;
    ++_init;
}
export extern(C) void terminate()
{
    --_init;
    if (!_init) rt_term;
}
Anyway, you should get the idea. GC was undefined because you hadn't initialized the low-level D runtime.
I solved the problem myself. As I said in the question's edit, I tracked the problem down to the garbage collector: it was messing with the script file that was loaded into memory, and that caused the library to crash because the GC had freed the script's contents. To solve this, I added:
import core.memory;
...
GC.disable();
This solved the whole problem.

MonoMac create context in background thread

This is the error I am getting:
MonoMac.AppKit.AppKitThreadAccessException: AppKit Consistency error: you are calling a method that can only be invoked from the UI thread.
I want to lay out my program as shown in Figure 14-1 of the Apple documentation.
The following Stack Overflow question seems to suggest this can be achieved in Cocoa.
The documentation seems to state that multiple GL contexts are perfectly plausible, so I'm guessing that at least some of them must exist outside of the main UI thread.
I am guessing that this could well be the problem. However, I want to make sure that an NSOpenGLContext on a separate thread is not inherently dangerous, and that one just has to follow the usual precautions for multi-threaded OpenGL programs.
Any help would save my table from being head-butted any more and thus would be greatly appreciated.
As suggested in the blog post linked in the question, you can use the following to turn off the cross-thread UI checks.
//
// Disable UIKit thread checks for a couple of methods
//
var previous = UIApplication.CheckForIllegalCrossThreadCalls;
UIApplication.CheckForIllegalCrossThreadCalls = false;

// Perform some UIKit calls here
foo.Bar = 1;

// Restore
UIApplication.CheckForIllegalCrossThreadCalls = previous;
Do note, though, that if you are doing the wrong thing this will also hide the problem, so use it sparingly; where possible, prefer marshaling the UI work back onto the main thread, as sketched below.
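For completeness, here is a minimal, untested sketch of that alternative. It assumes the surrounding class derives from NSObject (for example an NSWindowController), so MonoMac's InvokeOnMainThread is available, and foo is a hypothetical UI-owned object:
// Hedged sketch: assumes 'this' is an NSObject subclass and 'foo' is a
// hypothetical object that must only be touched on the UI thread.
System.Threading.ThreadPool.QueueUserWorkItem(_ =>
{
    // ... background work goes here (e.g. preparing data for the GL context) ...

    // Marshal only the UI mutation back to the main thread so AppKit's
    // consistency check is satisfied instead of bypassed.
    InvokeOnMainThread(() =>
    {
        foo.Bar = 1;
    });
});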

Can Vertex Array Objects (VAOs) be shared across EAGLContexts in OpenGL ES?

Spoiler: I'm fairly confident that the answer is NO, but that's only after a day of very frustrated debugging. Now I would like to know if that is indeed the case (and if so, how I might have known), or if I'm just doing something completely wrong.
Here's the situation. I'm using OpenGL ES 2.0 to render some meshes that I load from various files (.obj, .md2, etc.). For the sake of performance and User Experience, I delegate the actual loading of these meshes and their associated textures to a background thread using GCD.
Per Apple's instructions, on each background thread, I create and set a new EAGLContext with the same shareGroup as the main rendering context. This allows OpenGL objects, like texture and buffer objects, that were created on the background thread to be immediately used by the context on the main thread.
This has been working out splendidly. Now, I recently learned about Vertex Array Objects as a way to cache the OpenGL state associated with rendering the contents of certain buffers. It looks nice, and reduces the boilerplate state checking and setting code required to render each mesh. On top of it all, Apple also recommends using them in their Best Practices for Working with Vertex Data guide.
But I was having serious issues getting VAOs to work for me at all. As I do with all loading, I would load the mesh from a file into memory on a background thread, and then generate all of the associated OpenGL objects. Without fail, the first time I tried to call glDrawElements() using a VAO, the app would crash with EXC_BAD_ACCESS. Without the VAO, it rendered fine.
Debugging EXC_BAD_ACCESS is a pain, especially when NSZombies won't help (which they obviously won't), but after some time of analyzing captured OpenGL frames, I realized that, while the creation of the VAO on the background thread went fine (no GL_ERROR, and a non-zero id), when the time came to bind to the VAO on the main thread, I would get a GL_INVALID_OPERATION, which the docs state will happen when attempting to bind to a non-existent VAO. And sure enough, when looking at all the objects in the current context at the time of rendering, there isn't a single VAO to be seen, but all of the VBOs that were generated with the VAO AT THE SAME TIME are present. If I load the VAO on the main thread it works fine. Very frustrating.
I distilled the loading code to a more atomic form:
- (void)generate {
    glGenVertexArraysOES(1, &_vao);
    glBindVertexArrayOES(_vao);

    _vbos = malloc(sizeof(GLuint) * 4);
    glGenBuffers(4, _vbos);
}
When the above is executed on a background thread, with a valid EAGLContext that has the same shareGroup as the main context, the main context will have 4 VBOs but no VAO. If I execute it on the main thread, with the main context, it will have the 4 VBOs and the VAO. This leads me to the conclusion that there is some weird exception to the object-sharing nature of EAGLContexts when dealing with VAOs. If that were actually the case, I would have really expected the Apple docs to note it somewhere. It's very inconvenient to have to discover little tidbits like that by hand. Is this the case, or am I missing something?
According to this, OpenGL-ES explicitly disallows sharing of VAO objects:
Should vertex array objects be sharable across multiple OpenGL ES contexts?
RESOLVED: No. The OpenGL ES working group took a straw-poll and agreed that compatibility with OpenGL and ease of implementation were more important than creating the first non-shared named object in OpenGL ES.
As you noted, VBOs are still shareable, so you just have to create a VAO for each context that binds the shared VBO.

Creating a JVM via JNI_CreateJavaVM, receiving an OutOfMemoryError

I'm creating a JVM from a C++ program via JNI, and the creation itself works fine. Communication with the JVM also works fine: I am able to find classes, create objects, call methods and so on. But one of my methods needs quite a lot of memory, and the JVM throws an OutOfMemoryError when it is called. I don't understand this, as there is more than one GB of free RAM available. The whole process uses about 200MB, and it seems it doesn't even try to allocate more; it sticks at 200MB and then the exception is thrown.
I tried to pass the -Xmx option to the JVM, but it doesn't seem to take effect when the JVM is created through JNI. As far as I understood, a JVM created through JNI should be able to access all the available memory, making the -Xmx option unnecessary - but obviously this assumption is wrong.
So the question is: how can I tell the JVM that it should simply use as much memory as it needs?
System: MacOS 10.6
Creation of the JVM:
JNIEnv *env;
JavaVMInitArgs vm_args;
JavaVMOption options;

// jvm_options holds the classpath option pointing at the Java code to load
options.optionString = jvm_options;   // setting the classpath
vm_args.version = JNI_VERSION_1_6;    // JNI version 1.6
vm_args.nOptions = 1;
vm_args.options = &options;
vm_args.ignoreUnrecognized = 0;

int ret = JNI_CreateJavaVM(jvm, (void**)&env, &vm_args);
if (ret < 0)
    printf("\nUnable to Launch JVM\n");
It seems I had gotten something wrong with the -Xmx option - I tried it again and now it works.