In my app I am rendering a video generated from images I retrieve from the user's photos. I have set up an AVAssetWriter with an AVAssetWriterInput that has an AVAssetWriterInputPixelBufferAdaptor. I'm able to transform the ALAsset objects I retrieve from the user's library into CVPixelBuffers and add them to the video, which is then saved as an mp4. Adding all the images to the video is done on a background thread, which sends a notification to the main thread every frame so the interface can be updated. All this works well, and I get a usable movie file out of the app.
My problem now is that when the user switches to another application, the status of the AVAssetWriter changes to "failed" once my app becomes active again, and I am not able to add any more images to the movie file. First I thought I might have to end the current session on the writer and reopen a new one once the app has become active again, but that doesn't seem to help.
I was just wondering what the general approach is when the user should be able to switch to other applications. The best solution would be for the rendering to continue in the background; I suppose I'd need a background task from the UIApplication for that. But for now I'd be happy if rendering could just continue after resuming my app.
I won't post any code for now, because it's really a lot, and my question is possibly conceptual. If you need to see code, I'll post it.
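Conceptually, though, the setup is equivalent to this stripped-down sketch (identifiers such as outputURL and the output settings are illustrative, not my actual code):

NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeMPEG4
                                                    error:&error];
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          AVVideoCodecH264, AVVideoCodecKey,
                          [NSNumber numberWithInt:640], AVVideoWidthKey,
                          [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
AVAssetWriterInput *input =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                   sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
// On the background thread, once per frame:
// [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];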
Edit 1:
Tested on iOS 4.3 and iOS 5. I've seen background rendering on other apps such as iTimelapse, but I'm not sure which frameworks they use.
Edit 2:
I now have information from an Apple dev forum member that AVAssetWriter does not work in the background. So is there any other framework out there capable of rendering QuickTime videos?
Turns out that AVAssetWriter just won't survive the app being suspended. You can gain an extra 10 minutes of render time by requesting background time, but after that the asset writer fails. The same happens if you use certain services on the phone: making or answering a call, for example, will make the AVAssetWriter fail as well.
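For reference, those extra minutes come from wrapping the work in a background task; a minimal sketch (error handling omitted):

__block UIBackgroundTaskIdentifier taskID;
taskID = [[UIApplication sharedApplication]
             beginBackgroundTaskWithExpirationHandler:^{
    // Background time is up; past this point the writer will fail,
    // so finish or cancel the writing session here.
    [[UIApplication sharedApplication] endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}];
// ...keep appending frames; when rendering finishes normally:
// [[UIApplication sharedApplication] endBackgroundTask:taskID];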
If any OpenGL calls are made while your app is in the background, that would explain this behaviour; it looks fairly likely. From the OpenGL ES Programming Guide:
Background Applications May Not Execute Commands on the Graphics Hardware
An OpenGL ES application is terminated if it attempts to execute OpenGL ES commands on the graphics hardware. This not only refers to calls made to OpenGL ES while your application is in the background, but also refers to previously submitted commands that have not yet completed. The main reason for preventing background applications from processing OpenGL ES commands is to make the graphics processor completely available to the frontmost application. The frontmost application should always present a great experience to the user. Allowing background applications to hog the graphics processor might prevent that. Your application must ensure that all previously submitted commands have been finished prior to moving into the background.
The docs go on to enumerate a set of guidelines for the enter-background/enter-foreground app delegate callbacks. I think finding a way to do the rendering without the graphics hardware would be tricky. Also, the frameworks that allow mp4 encoding (like FFmpeg) are mainly GPL/LGPL licensed, so you need to be careful if dealing with a commercial product (LGPL means you may link to the library only dynamically, not statically, which is useless on iOS), as the license would otherwise propagate to your code.
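In line with those guidelines, the usual pattern is to drain the GL pipeline before the app is suspended; a sketch, assuming self.context is your EAGLContext and you drive rendering from a display link:

- (void)applicationDidEnterBackground:(UIApplication *)application
{
    [EAGLContext setCurrentContext:self.context];
    glFinish(); // block until all previously submitted commands complete
    // Also stop the display link / render timer here.
}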
Related
I've built a small Cocoa app that uses OpenGL to draw some light content. I've used CAOpenGLLayer. While the app itself is very small and works blazingly fast, its launch time is not satisfactory at all: at random times the app stalls for about a second or two on launch before showing the content.
I've narrowed down the problem and found that the bottleneck is the CGLChoosePixelFormat() function that is called during OpenGL initialization. It literally takes ~1 second to execute.
For a clean experiment, I created a blank Cocoa app and added an NSOpenGLView to the window. Immediately, the app launch time has grown by 1-2 seconds, for the same reason.
Is there a way to fight this problem? There seems to be no way to avoid using CGLChoosePixelFormat(); it's essential for getting OpenGL to work on a Mac.
Also, it's said that Core Animation is built upon OpenGL under the hood, yet my Core Animation apps do not exhibit this slow startup problem at all. I also tried a symbolic breakpoint on CGLChoosePixelFormat in a Core Animation app, but it doesn't trigger. So Core Animation either doesn't use OpenGL or initializes it in a different way. Does anybody have a solution?
P.S. I know Metal is now the way to go for 3D graphics on Macs, but I need to do this particular project on OpenGL for backward compatibility reasons.
I've run into exactly the same problem. So far I've found that it only occurs on the MacBook Pro 16" 2019.
Since this problem is hard to reproduce, I can do nothing more than guess. For now I have replaced the NSOpenGLPixelFormat initWithAttributes: call with [NSOpenGLView defaultPixelFormat], and I hope that will work.
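In code, the swap is simply this (frame being whatever rect the view was already created with):

NSOpenGLPixelFormat *format = [NSOpenGLView defaultPixelFormat];
NSOpenGLView *glView = [[NSOpenGLView alloc] initWithFrame:frame
                                               pixelFormat:format];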
I have two issues in my Metal App.
My call to currentRenderPassDescriptor is stalling. I have too many drawables, apparently.
I'm wholly confused about how to most performantly configure the multiple MTKViews I am using.
Issue (1)
I have a problem with currentRenderPassDescriptor in my app. It occasionally blocks (for 1.00 s), which, according to the docs, is because there is no currentDrawable available.
Background: I have 4 HD 1920x1080 videos playing concurrently, tiled out onto a 3840x2160 second external display as a debugging configuration. The pixel buffers of these AVPlayer instances are captured by 4 independent CVDisplayLink callbacks and, from within each callback, a draw call is made to its assigned MTKView. A total of 4 MTKViews are subviews tiled on a single NSWindow, and they are configured for manual drawing.
I'm using CVDisplayLink callbacks manually. If I don't, then I get stutter when mousing up on the app’s menus, for example.
Within each draw call, I do a bit of kernel shader work, then attempt to obtain the currentRenderPassDescriptor. If successful, I do one pass of a fragment/vertex shader and then present the drawable. My code flow follows Apple's sample code as well as published examples.
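Roughly, each draw looks like this simplified sketch (commandQueue is an MTLCommandQueue created once at startup; method and variable names here are illustrative):

- (void)drawFrameInView:(MTKView *)view
{
    @autoreleasepool {
        id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
        // ...kernel (compute) shader work encoded here...
        MTLRenderPassDescriptor *pass = view.currentRenderPassDescriptor;
        if (pass == nil) {
            // No drawable became available within the timeout; skip the frame.
            [commandBuffer commit];
            return;
        }
        id<MTLRenderCommandEncoder> encoder =
            [commandBuffer renderCommandEncoderWithDescriptor:pass];
        // ...one fragment/vertex pass...
        [encoder endEncoding];
        [commandBuffer presentDrawable:view.currentDrawable];
        [commandBuffer commit];
    }
}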
According to Metal System Trace, most draw calls take under 5 ms. The GPU is about 20-25% utilized, and about 25% of the GPU memory is free. I can also make the main thread usleep() for 1 second without any hiccups.
Without any user interaction, there's about a 5% chance of the videos stalling out in the first minute. If there's some UI work going on, I see it as WindowServer work in Instruments. I also note that AVFoundation seems to cache about 15 frames of video onto the GPU for each AVPlayer.
If the cadence of the draw calls is upset, there's about a 10% chance that things stall completely, or that only some of the videos do: some stall completely, some stall with 1 Hz updates, and some don't stall at all. There's also less chance of stalling when running Metal System Trace. The movies that have stalled seem to have done so while obtaining a currentRenderPassDescriptor.
Having currentRenderPassDescriptor block for ≈1 s during a render loop is really poor design, so much so that I'm thinking of eschewing MTKView altogether and just drawing to a CAMetalLayer myself. But the docs on CAMetalLayer seem to indicate the same blocking behaviour will occur.
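For what it's worth, the direct CAMetalLayer path would be roughly this (metalLayer being a CAMetalLayer I'd configure myself):

id<CAMetalDrawable> drawable = [metalLayer nextDrawable];
// nextDrawable likewise blocks (for up to about a second) while all
// drawables are in flight, so the stall would presumably just move here.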
I also grab these 4 pixel buffers on the fly and render sub-sized regions of interest to 4 smaller MTKViews on the main monitor; the stutters still occur even if this code is removed.
Is the drawable buffer limit per MTKView or per the backing CALayer? The docs for maximumDrawableCount on CAMetalLayer say the number needs to be 2 or 3. This question ties into the configuration of the views.
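For reference, that limit is set per layer; something like this, assuming each MTKView's backing layer is its CAMetalLayer:

CAMetalLayer *metalLayer = (CAMetalLayer *)mtkView.layer;
metalLayer.maximumDrawableCount = 3; // the docs allow only 2 or 3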
Issue (2)
My current setup is a 3840x2160 NSWindow with a single content view. This subclass of NSView does some hiding/revealing of the mouse cursor by introducing an NSTrackingRectTag. The MTKViews are tiled subviews on this content view.
Is this the best configuration? Namely, one NSWindow with tiled MTKViews… or should I do one MTKView per window?
I'm also not sure how best to configure these windows/layers, i.e. by setting (or clearing) wantsLayer, wantsUpdateLayer, and/or canDrawSubviewsIntoLayer. I'm currently just setting wantsLayer to YES on the single content view. Any hints on this would be great.
Does adjusting these properties collapse all the available drawables to the backing layer only, or are there still 2 or 3 per MTKView?
NB: I've attached a sample run of my Metal app. The longest 'work' on the top graph is just under 5ms. The clumps of green/blue are rendering on the 4 MTKViews. The 'work' alternates a bit because one of the videos is a 60fps source; the others are all 30fps.
I'm very new to Objective-C; this is actually my first app...
I'm working on an application which has a list of projects; each project has its own gallery of images, with segues and ARC. The gallery takes about 90% of the screen, and a row of thumbnails takes the rest.
It runs OK on the simulator, but when I leave one gallery for another (sometimes after three or four transitions), the application crashes on a real device (an iPad 2 with iOS 6).
There is no exception or error; the log is clean. It seems to crash when the application reaches 350 MB of RAM.
I believe the memory is not being released between passes through the galleries, even though I am using ARC and segues.
In addition, on the first entry to each gallery it takes a few seconds for the gallery to load (only on the first run; if I exit and re-enter the same gallery it loads immediately), which seems a further clue that it is being kept in memory.
I would really appreciate any idea, even a simple one, as this is my first app and I'm not very experienced.
Thanks for your time and help...
I am not sure that the exact reason for this is a memory issue, but when you handle big payloads (data) in your project, you have to think about what happens when memory usage reaches the maximum the system will allocate to the app at the time.
Thankfully, the API gives you a callback method that fires when the app approaches the maximum memory the system can handle:
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Release any additional memory used by your view controller here,
    // e.g. the UIImage objects of the gallery in your case.
}
You don't normally call this method directly; the system calls it for you. But you do have the option to trigger it yourself when debugging in the Simulator (Hardware > Simulate Memory Warning).
See the documentation for didReceiveMemoryWarning.
I have a C++ DirectX-based third-party game engine compiled into a Windows Phone Runtime Component DLL. I'm working on integrating it into a project based on the Windows Phone Direct3D with XAML App template. The game engine DLL uses the D3D device, context and render target view provided by the application's Direct3DBackground::Draw() method.
The built-in renderer from the sample is gone and replaced by the game engine's.
I can render, but there is constant black flickering; every other frame is black. To prove to myself that it wasn't the renderer (which has been proven to work elsewhere), I cut all the rendering code out of the game engine DLL and replaced it with simply setting a clear color. The result is still the same.
At first I thought it was because the Direct3DXamlAppComponent generated by the sample might be running on a different thread from the game engine DLL, but that's not the case; they're on the same thread.
What rendering problem could this configuration be causing?
Does the game engine's renderer need a separate d3d device?
Does the game engine's renderer need a separate d3d device context?
Things I haven't tried yet:
creating a second D3D device in the DLL
converting the game engine to provide its own IDrawingSurfaceManipulationHandler, though I'm not sure whether it would just have the same problem as above
The problem came from the render target view. I didn't realize that the pointer to it gets updated every frame; I had set it on the game engine renderer only once, at startup. Now I update the render target view pointer every frame, and the black flickering is gone.
Is there any way we can identify whether a process has a user interface? Anything that deals with drawing on the screen.
I looked at this question, but it does not give a proper answer. I am inside the process. Is there some API I can query, or some technique?
EDIT: I am inside the process; I am doing code injection. I want to behave differently for non-GUI apps than for GUI ones. I cannot use CFBundleGetValueForInfoDictionaryKey() or NSApplicationMain, as I want to keep the injection module light, and thus I cannot link against these frameworks.
This depends heavily on what you mean by "has a graphical interface." Do you mean, "is displaying a window?" Or "has a dock icon?" Or "has access to the Aqua context such that it could display a window if it wanted to?" Or "is linked with AppKit?"
An app bundle that does not have a dock icon will have the Info.plist key LSUIElement set to "1". You can fetch this using CFBundleGetValueForInfoDictionaryKey() (or the NSBundle equivalent if you're in Objective-C). This does not mean the application has no GUI, but does mean it won't show up in the dock. Many LSUIElement apps have a status item UI.
You can also check that you actually have an Info.plist at all, using CFBundleCopyBundleURL() or the like. If you don't have an Info.plist, then in all likelihood you're not going to be a "GUI-like" program. (Though again, it's possible to create GUIs without one.)
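Both of those checks can stay at the CoreFoundation level; a sketch (treating either a boolean or a string plist value as true):

CFBundleRef bundle = CFBundleGetMainBundle();
CFURLRef bundleURL = bundle ? CFBundleCopyBundleURL(bundle) : NULL;
CFTypeRef uiElement = bundle
    ? CFBundleGetValueForInfoDictionaryKey(bundle, CFSTR("LSUIElement"))
    : NULL;
// bundleURL == NULL suggests there is no bundle (and no Info.plist) at all;
// uiElement may be a CFBoolean or the CFString "1", either of which means
// the app will not get a Dock icon.
if (bundleURL != NULL) CFRelease(bundleURL);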
You can use weak linking to test for AppKit:
// NSApplicationMain is declared in <AppKit/AppKit.h>; build with
// -weak_framework AppKit so the symbol can legitimately be NULL here.
if (NSApplicationMain != NULL) {
    // We're linked with AppKit
}
That's a fairly good indicator that you're going to have a UI. But it won't catch older Carbon apps.
Some more background on what you mean by "I am inside the process" would be helpful. I assume you're a framework and want to behave differently for GUI apps than for non-GUI apps?
To detect an application capable of drawing on the screen, I would check for whether CoreGraphics is linked. That'll only work for programs built since 10.3, but should be fairly accurate (I believe that old QuickDraw apps still link Core Graphics in 10.3+, but I don't have one handy to check). Probably the best way is to do a weak linking check against CGColorCreate() since it's been around for a long time and is unlikely to ever go away:
extern CGColorRef CGColorCreate(CGColorSpaceRef space, const CGFloat components[])
    __attribute__((weak_import));
...
if (CGColorCreate != NULL) {
    // Linked with CoreGraphics. Probably a GUI.
}
Of course, things can link with CoreGraphics and be capable of drawing on the screen, yet never actually draw on it. They might link with CoreGraphics in order to do image processing. It might be more accurate to look for ApplicationServices. You could test ApplicationServicesVersionNumber against NULL, but that's not documented.
This will not catch X apps, since they don't technically draw on the screen. They send commands to X11.app, which is a GUI. Do you consider /usr/X11/bin/xterm a GUI for this purpose? /usr/X11/bin/xeyes?
There is no particular flag I'm aware of in any app that says unambiguously "I draw on the screen." Drawing on the screen is not something you need to pre-declare. Apps that draw on the screen don't have to do so every time they run. Apps that are generally invisible might optionally or occasionally create a status item. It's hard to come up with a single description that includes GrowlHelperApp.app, Pages.app, and /usr/X11/bin/xeyes.