My goal is to identify when an animation is running on lower-end hardware (lower-end GPU) and change the animation or storyboard to deliver a better user experience (or less animation).
Is it possible to determine an animation's frame rate?
Technically there is no way to get access to the frame rate. You could detect the amount of time between each report of the CompositionTarget.Rendering event, but that event only tracks when the animation system has finished sending its updates to the rendering subsystem (which very well may decide to skip frames depending on the state of the graphics card).
There's a very interesting discussion on tracking WPF framerate on the MSDN forums here. Pretty much all of that should be applicable to Windows Store XAML apps.
If all you're wanting to do is swap out animations if they're taking too long to run, CompositionTarget.Rendering may be suitable for what you need.
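If it helps, here's a rough sketch of that approach for a Windows Store XAML page (the 40 ms threshold, the slow-frame count, and SwitchToSimplerAnimation are all placeholders to tune for your app):

    // Sketch: measure the interval between CompositionTarget.Rendering callbacks
    // and fall back to a simpler animation when frames are consistently slow.
    using System;
    using Windows.UI.Xaml.Media;

    public sealed partial class MainPage
    {
        private DateTime _lastRender = DateTime.Now;
        private int _slowFrames;

        private void StartFrameMonitoring()
        {
            CompositionTarget.Rendering += OnRendering;
        }

        private void OnRendering(object sender, object e)
        {
            DateTime now = DateTime.Now;
            double ms = (now - _lastRender).TotalMilliseconds;
            _lastRender = now;

            // Treat anything much slower than ~30 fps as a "slow" frame.
            if (ms > 40) _slowFrames++; else _slowFrames = 0;

            // Several slow frames in a row: stop watching and simplify things.
            if (_slowFrames >= 10)
            {
                CompositionTarget.Rendering -= OnRendering;
                SwitchToSimplerAnimation();
            }
        }

        private void SwitchToSimplerAnimation()
        {
            // Hypothetical: stop the heavy storyboard and start a reduced one.
        }
    }

Keep in mind (per the above) that this measures how often the animation system hands off to the rendering subsystem, not the true display frame rate.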
I am creating an application for Mac, in Objective-C, which will run in the menu bar and do periodic desktop operations (such as changing the wallpaper). I am creating the application so that it stays in the menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page in the Apple Developer docs): using an NSTimer will prevent the laptop from going to sleep, and it will need an always-running thread to check for when the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think the second way is better, but am not sure if it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers, each firing once a second, but offset from each other by 100ms (just by chance of what time it was when they happened to start). That means the longest possible idle "gap" is 100ms. But if they were configured at 1 second with a 0.9 second tolerance (i.e. allowed to fire anywhere between 1s and 1.9s), then the system could schedule them all together, do a bunch of work at once, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, then you're wasting power. (Sounds like you already have this in hand.) The second thing you should do is set a reasonable tolerance. The default is 0, which means a very small tolerance (it's not literally "0 tolerance," but it's tiny compared to minutes). For your kind of problem, I'd probably use a tolerance of at least 1s.
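For example (a rough sketch; doDesktopWork: is a stand-in for your wallpaper-changing method, and the tolerance property requires OS X 10.9 or later):

    // Repeating timer with a generous tolerance so the system can coalesce
    // the wake-up with other work. doDesktopWork: is a hypothetical method.
    NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:15.0 * 60.0   // every 15 minutes
                                                      target:self
                                                    selector:@selector(doDesktopWork:)
                                                    userInfo:nil
                                                     repeats:YES];
    timer.tolerance = 60.0;   // anywhere within a minute of the target is fine here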
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is of course possible to do this with launchd, but it adds a lot of complexity, especially around installation. I don't recommend it for the problem you're describing.
I am evaluating some code that stacks calls to beginBackgroundTaskWithExpirationHandler in an effort to leave a timer running in the background. I have to admit that it is a pretty clever idea, but I'm not sure if it is best practice.
So the flow (sketched in code after this list):
Call beginBackgroundTaskWithExpirationHandler with a callback handler
When it returns, do something, then call again
Rinse and repeat, checking for TaskInvalid along the way
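In code, the pattern looks roughly like this (just a sketch; doSomeWork and the bgTask property are placeholders):

    // Rough sketch of the "stacked" pattern being described; assumes a
    // UIBackgroundTaskIdentifier property named bgTask. Not an endorsement.
    - (void)requestMoreBackgroundTime
    {
        __weak typeof(self) weakSelf = self;
        self.bgTask = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
            // Time is about to expire: end this task...
            [[UIApplication sharedApplication] endBackgroundTask:weakSelf.bgTask];
            weakSelf.bgTask = UIBackgroundTaskInvalid;
            // ...and immediately ask for another slice ("rinse and repeat").
            [weakSelf requestMoreBackgroundTime];
        }];

        if (self.bgTask == UIBackgroundTaskInvalid) {
            // The system refused to grant background time; stop trying.
            return;
        }
        [self doSomeWork];   // hypothetical periodic work
    }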
I know that 180 seconds is the max time, but that this can be shorter in some cases.
To the questions:
1: Is this legal?
2: Would you suspect that Apple would be OK with giving the app 3 minutes of background over and over, thus leaving the process in the background for say a hour?
3: Would you count on this?
Thanks in advance!
Would you suspect that Apple would be OK with giving the app 3 minutes of background over and over, thus leaving the process in the background for say a hour?
No, Apple would not be OK with it, even if you could do it. They specified a 3 minute limit for a reason: to ensure that we don't have apps running in the background without the user's knowledge, consuming CPU cycles and memory and draining the battery. (In fact, the "finite task" limit used to be 10 minutes, but several years ago Apple further restricted it to just 3 minutes.) Imagine a world where all app developers routinely circumvented this 3 minute limit: our devices would have their batteries drained quickly, and foreground apps would be less responsive and have less memory with which to operate.
Having said that, Apple has identified a very narrow set of operations that are acceptable to keep running in the background (VOIP, navigation apps, music apps, bluetooth operation, etc.), where the user has reasonable expectations about the battery and performance impact.
There are also classes of tasks that employ some limited background capabilities (e.g. requesting time to complete some finite-length task, opportunistic periodic background fetch, significant-change location services, giving the user a chance to respond to push or local notifications, etc.). The intent is to offer meaningful background capabilities while minimizing the battery impact.
Bottom line, Apple otherwise discourages/curtails the use of indiscriminate background operation. In the App Store Guidelines, they explicitly say
2.16 Multitasking Apps may only use background services for their intended purposes: VoIP, audio playback, location, task completion, local notifications, etc.
...
4.5 Apps using background location services must provide a reason that clarifies the purpose of the use, using mechanisms described in the Human Interface Guidelines
Having said all of this, if you describe what precisely you need this background operation for, we might be able to describe which of the multitude of different background capabilities Apple offers you could use to achieve the desired effect. All of these interfaces are designed to solve specific problems while balancing functionality with the scarce resources on our devices. But if it's something like "hey, I want to ping my server every five minutes", then no, Apple will frown upon that.
For more information, this is discussed in some detail in the Background Execution chapter of the App Programming Guide for iOS.
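For reference, the supported finite-length task pattern wraps a single bounded piece of work and ends the task as soon as it finishes; roughly (a sketch, where uploadDataWithCompletion: is a placeholder for your own work):

    // Supported pattern: request time for one bounded piece of work and end
    // the task as soon as it completes (or when the expiration handler fires).
    - (void)finishUploadInBackground
    {
        __block UIBackgroundTaskIdentifier taskID;
        taskID = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
            // Ran out of time before the work finished; clean up and end the task.
            [[UIApplication sharedApplication] endBackgroundTask:taskID];
            taskID = UIBackgroundTaskInvalid;
        }];

        [self uploadDataWithCompletion:^{
            // Done well before expiration; end the task promptly.
            [[UIApplication sharedApplication] endBackgroundTask:taskID];
            taskID = UIBackgroundTaskInvalid;
        }];
    }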
I've a question regarding IOSurface on Cocoa.
After the extensive research required to switch my OpenGL realtime application to 64 bit, I've taken the only path available to support QuickTime playback: spawning a background process that pulls the frames (installing a frame-ready callback and copying each frame with QTVisualContextCopyImageForTime) and passes the IOSurfaceRef through RPC to the parent process.
Everything works fine, but there's one main issue. In my 32 bit application I was able to serialize every call to the GL subsystem: render a frame, pull the QT frames for the next pass, and then wait for the next V-sync. This produced a very smooth and stable result.
Using the IOSurface technique gives me no way to synchronize when my app draws a frame with when the background process pulls the IOSurface from the QuickTime movie. The result is that, on a random basis, I experience performance spikes. Indeed, the OpenGL Driver Monitor shows CPU wait cycles rising to 10% in my 64 bit app, while the CPU wait graph stays at 0% under 32 bit.
Has anyone here used IOSurface in a real-world application and faced issues like this one? I've thought about an interprocess mutex/lock, but considering I need to lock/unlock about 120 times per second, I was not able to find a valid solution; it doesn't seem that Darwin has anything like the named signals available in Win32...
Any suggestions, or should I take a totally different approach to the problem?
Thanks!
Leopard 10.5.8, Xcode 3.1.1; using runModalForWindow to implement (what is intended to be) a high-performance mouse-tracking mechanism where I have to do real-time complex bitmap modifications.
The modal loop runs, the timer fires, the mouse tracks... but the performance is abysmal, and it gets worse and worse the longer the runloop goes on. Instead of catching mouse messages every pixel or so, I get them every 5... 10.... 20 seconds.
Instruments shows that the majority of the time during this growing response bottleneck is being spent in mach_msg_trap (and yes, I have the perspective set to the running app), so the impression I am under is that it "thinks" it doesn't have any work to do, despite the fact that I'm dragging the mouse around with the button held down like a crazy person. There are no memory leaks showing up, and on my 8-core 2.8 GHz machine, there's almost no CPU activity going on.
Again, the app is not spending much time in my code... so it's not a performance problem of mine. I've probably configured something wrong, or failed to configure it at all, or am simply approaching the whole idea wrong -- but I sure would appreciate some insight here. As it stands now, the dispatch of mouse messages and timer messages is absolutely unacceptable. You couldn't implement a crayon drawing program for someone immersed in cold molasses with the response times I'm getting.
EDIT: Some additional info: this doesn't happen on my 10.5.8 MacBook Pro, just the 8-core, 6-display Mac Pro. I tried taking the display code for the croprect in drawRect out and replacing it with an NSLog()... it still drags on issuing mouse updates. I also tried rebooting and running without the usual complement of apps running, and with mirrored displays. No difference.
Imagine dragging a brush across the screen: at first, it paints smoothly, then gaps appear between brush placements, then they get larger, and this goes on until you're only getting one brush placement every 10 seconds. That's how this acts. Using NSLog() and various other tracking methods, I've determined that, at least at the highest level, it is occurring because the mouseDragged events slow down to a trickle. The question in a nutshell is: why would that happen?
Anyone?
OK, isolated it -- the problem comes from my Wacom tablet mouse. Plug in a regular optical mouse and everything runs great. Same thing on my MacBook Pro using the trackpad: it works fine.
The tablet is a Wacom Intuos 4 with the stock drivers as of January 2011. I'm going over to the Wacom site and reporting this next.
What a nightmare that was. I have spent over 100 hours on this, thinking I'd hosed some subtlety in the app handling, drawing, etc. Sheesh.
I was contemplating the fact that imposing a frames-per-second cap is not ideal with regard to latency and performance, since the monitor still has a chance to show something sooner (assuming no vertical sync, which is mainly for stability).
However, it occurred to me that such a cap might have ramifications for audio subsystems (or input devices as well), assuming the frames-per-second limit also governs the global loop of the engine itself (as it does in almost all 3D accelerated applications).
Do audio cards/audio devices on an operating system have a concept of "Hertz" that might be related? Do we assume the faster the global loop of the application the better the latency for the audio subsystems?
Audio is not typically affected at all. The audio has a separate buffer which you fill up in advance, maybe a hundred milliseconds or more, and the sound card plays back from that regardless of what your game loop is doing.
It is possible, if you fill that buffer from your game loop, that taking too long to get back to the loop will result in the sound buffer running empty and a looping sound being heard. To avoid this, developers will either use a big buffer or fill the buffer from a background thread.
As you might guess, audio is already running at some sort of latency, proportional to the amount of data in the buffer that the game attempts to keep in there at all times. This is usually not so noticeable since sound takes a non-negligible time to travel in real life anyway. Pro audio applications have to keep this buffer small for low latency and responsiveness, but they don't have graphical frames to worry about...
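As a rough illustration of the background-fill idea (plain C with pthreads; mix_samples() stands in for your mixer/synth, the drain in the game loop is simulated, and the numbers are arbitrary):

    #include <pthread.h>
    #include <stddef.h>
    #include <unistd.h>

    #define TARGET_QUEUE  8192     /* keep ~185 ms of 44.1 kHz mono queued */
    #define SAMPLE_RATE   44100

    static size_t queued_samples;               /* samples waiting to be played */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Hypothetical stand-in: generate n samples and hand them to the audio API. */
    static void mix_samples(size_t n) { (void)n; }

    /* Audio thread: top the queue back up every few milliseconds, regardless of
       what the game loop is doing, so a long frame can't starve the sound card. */
    static void *audio_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            if (queued_samples < TARGET_QUEUE) {
                mix_samples(TARGET_QUEUE - queued_samples);
                queued_samples = TARGET_QUEUE;
            }
            pthread_mutex_unlock(&lock);
            usleep(5 * 1000);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, audio_thread, NULL);

        for (;;) {
            /* Game loop: update + render at the display's pace. Here we just
               simulate the sound card draining one frame's worth of audio. */
            pthread_mutex_lock(&lock);
            size_t played = SAMPLE_RATE / 60;
            queued_samples = queued_samples > played ? queued_samples - played : 0;
            pthread_mutex_unlock(&lock);
            usleep(16 * 1000);
        }
    }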
As far as input goes, yes, it is often affected. If a game does not decouple the rendering rate from the input handling rate, then there will be some additional latency on the way in. There will always be some additional perceived latency on the way out too, if you consider input latency as the length of time between performing an action and seeing it take effect. But that perceived latency may well be larger than the simulation latency, since the affected entity may have been altered at an earlier timestep than the one displayed in the next frame.
Typically the game processing and the display aren't coupled, so reducing the FPS of the display won't affect the central game processing, which would include things like input (and presumably audio).
This article explains it pretty well, including different options for having FPS linked to game speed or not.
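A common shape for that decoupling is a fixed simulation timestep with rendering at whatever rate the display manages; roughly (a plain C sketch with stubbed-out engine functions):

    #include <stdbool.h>
    #include <time.h>

    #define SIM_DT (1.0 / 120.0)    /* input + simulation tick at a fixed 120 Hz */

    static double now_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static void poll_input(void)  {}             /* stand-in: read input devices  */
    static void update(double dt) { (void)dt; }  /* stand-in: advance simulation  */
    static void render(void)      {}             /* stand-in: draw one frame      */

    int main(void)
    {
        double previous = now_seconds();
        double accumulator = 0.0;

        while (true) {
            double current = now_seconds();
            accumulator += current - previous;
            previous = current;

            /* Input and simulation advance in fixed steps regardless of how slow
               (or capped) rendering is, so a lower FPS doesn't change game speed. */
            while (accumulator >= SIM_DT) {
                poll_input();
                update(SIM_DT);
                accumulator -= SIM_DT;
            }

            /* Render once per pass, at whatever rate the display/GPU allows. */
            render();
        }
    }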