My app runs well at launch. Normally it uses only about 0.2% CPU while running.
But after using the app day after day, it now uses 15% CPU, which is really huge for me.
I think something goes wrong after I put my MacBook to sleep many times; I never turn the MacBook off.
I don't know where to start investigating this bug.
PS: my app uses many NSTimers, which are added to NSRunLoopCommonModes.
Thanks,
The only real answer is: Profile and see where the time is being used.
In sleep mode, the operating system and other programs generally do little or nothing. If your app keeps looping and ignores sleep mode, its relative CPU usage will go up because everything else is using less.
Ideally your app should detect sleep mode and adjust its behaviour, e.g. suspend the loop.
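One way to do that in a Cocoa app is to watch the workspace sleep/wake notifications and pause the timers around them. A minimal sketch (suspendTimers/resumeTimers are hypothetical stand-ins for your own timer management):

// Somewhere at startup, e.g. in applicationDidFinishLaunching:.
// NSWorkspace posts these on its own notification center, not the default one.
NSNotificationCenter *nc = [[NSWorkspace sharedWorkspace] notificationCenter];
[nc addObserver:self selector:@selector(workspaceWillSleep:)
           name:NSWorkspaceWillSleepNotification object:nil];
[nc addObserver:self selector:@selector(workspaceDidWake:)
           name:NSWorkspaceDidWakeNotification object:nil];

- (void)workspaceWillSleep:(NSNotification *)note {
    [self suspendTimers];   // invalidate or pause your NSTimers here
}

- (void)workspaceDidWake:(NSNotification *)note {
    [self resumeTimers];    // recreate/reschedule them on wake
}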
I am creating an application for Mac, in Objective-C, which will run in the menu bar and do periodic desktop operations (such as changing the wallpaper). I am creating the application so that it stays in the menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory-efficient (after reading the following page in the Apple Developer docs): using an NSTimer will prevent the laptop from going to sleep, and it needs an always-running thread to check when the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think the second way is better, but I am not sure whether it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1-second frequency, but they're offset from each other by 100ms (just by chance of what time it was when they happened to start). That means the longest possible idle "gap" the CPU ever gets is 100ms. But if they were configured at 1 second with a 0.9-second tolerance (i.e. firing anywhere between 1s and 1.9s), then the system could schedule them all together, do a bunch of work, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, then you're wasting power. (Sounds like you already have this in hand.) The second thing you should do is set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not literally zero tolerance, but it's very small compared to minutes). For your kind of problem, I'd probably use a tolerance of at least 1s.
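For example (a minimal sketch; the 15-minute interval, 30-second tolerance, and doPeriodicWork: selector are placeholders for whatever fits your schedule):

// Do the periodic desktop work every 15 minutes, but allow 30 seconds of
// slack so the system can batch this wakeup together with other timers.
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:15 * 60
                                                  target:self
                                                selector:@selector(doPeriodicWork:)
                                                userInfo:nil
                                                 repeats:YES];
timer.tolerance = 30.0;   // the tolerance property is available since OS X 10.9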
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is possible, of course, to do this with launchd, but it adds a lot of complexity, especially at installation. I don't recommend it for the problem you're describing.
I first learned about how computers work in terms of a primitive single stored program machine.
Now I'm learning about multitasking operating systems, scheduling, context switching, etc. I think I have a fairly good grasp of it all, except for one thing. I have always thought of a CPU as something which is just charging forward non-stop. It always knows where to go next (program counter), and it goes to that instruction, etc, ad infinitum.
Clearly this is not the case, since my desktop computer's CPU is not always running at 100%. So how does the CPU shut itself off or throttle itself down, and what role does the OS play in this? I'm guessing there's an input on the CPU somewhere which allows it to power down... and the OS can set this if it has nothing to schedule, but the next logical question is how it starts back up again. I'm guessing one of two things:
It never shuts down completely, just runs at a very low frequency waiting for the scheduler to get busy again
It shuts down completely but is woken up by interrupts
I searched all over for info on this and came up fairly empty-handed. Any insight would be much appreciated.
The answer is that it depends on the hardware, the operating system, and the way that the operating system has been configured.
And it could involve either or both of the strategies you proposed.
Another possibility, for machines based on the x86 architecture, is that x86 has an HLT instruction that causes the core to stop until it receives an external interrupt. So the "idle" task could simply execute HLT in a tight loop.
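As a sketch (kernel mode only: HLT is a privileged instruction, and interrupts must be enabled so something can wake the core):

// Hypothetical idle task: halt until the next interrupt, then loop.
// Each interrupt (timer tick, keypress, ...) wakes the core; after the
// handler returns, control falls back into the loop and halts again.
void idle_task(void) {
    for (;;) {
        __asm__ volatile ("hlt");
    }
}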
Just go to Task Manager's Performance tab and watch the CPU usage while you're doing absolutely nothing on your computer: it never stops fluctuating. With an operating system like Windows running, the CPU is ALWAYS doing something; it never completely shuts down.
Even having your monitor display an image requires the CPU to do some processing, etc.
Everything runs through the CPU; like your brain, it controls everything, and nothing would function without it.
Some CPUs do have a 'wait for interrupt' instruction which allows the CPU to stop executing instructions when there is nothing to do, and will not re-awake until there is an interrupt event. This is particularly useful in microcontrollers, where they can sit for long periods of time waiting for something to happen.
Intel = HLT (Halt)
ARM = WFI (Wait for interrupt)
Sometimes a 'busy wait' is also used, where the CPU sits in a little 'idle' loop, checking for things to do. In this case the CPU is still running instructions, but the operating system is in an idle state. It's not as efficient as using HLT.
Modern CPUs can also adjust their power usage, and are capable of reducing clock rates, or shutting down parts of the CPU that aren't being used. In this way, power usage during an active idle state can be less than during active processing, even though the core CPU is still running and executing instructions.
Speaking about the x86 architecture: when an operating system has nothing to do, it can use the HLT instruction.
The HLT instruction stops the CPU until the next interrupt.
See http://en.m.wikipedia.org/wiki/HLT for details.
Other architectures have similar instructions to give the CPU a rest.
Is there a tool to see CPU usage of a task/thread on a Symbian^3 phone?
Using PerfMon I only see the global CPU usage.
Yep, I didn't put per-process CPU counts there, since you can't really get them. Also, to be fully honest, the global CPU measurement is a bit fake: it simply has an idle-priority timer going off. If it fires every time, there is no CPU load, and if it never fires, the CPU load is 100%. In effect it's a bit relative, but I have to say it has worked for the tasks I have needed it for.
BTW, I redid the site, and all Symbian apps are now at http://www.drjukka.com/YBrowser.html. Let's see how long it takes me to push all the code to GitHub; anyway, it will hopefully end up there before the 7th birthday of the Y-Tasks app.
Maybe Dr. Jukka's Y-Tasks application will be useful: http://www.drjukka.com/YTasks.html
I am writing an application for OS X (Obj-C/Cocoa) that runs a simulation and displays the results to the user. In one case, I want the simulation to run in "real time," so that the user can watch it go by at the same speed it would happen in real life. The simulation is run with a specific timestep, dt. Right now, I am using mach_absolute_time() to slow down the simulation. When I profile this code, I see that by far most of my CPU time is spent in mach_absolute_time(), and my CPU is pegged at 100%. Am I doing this right? I figured that if I'm slowing down the simulation so that the program isn't simulating anything most of the time, CPU usage should be low; but mach_absolute_time() obviously isn't a free call, so I feel like there might be a better way?
#include <mach/mach_time.h>

// Note: mach_absolute_time() returns a uint64_t in Mach time units, not
// nanoseconds; dt_ns must be converted with mach_timebase_info() first
// (on many Intel Macs the ratio happens to be 1:1, but don't rely on it).
uint64_t nextT = mach_absolute_time();
while (runningSimulation)
{
    if (mach_absolute_time() >= nextT)
    {
        nextT += dt_ns;
        // Compute the next "frame" of the simulation
        // ....
    }
}
Do not spin at all.
That is the first rule of writing GUI apps where battery life and app responsiveness matter.
sleep() or nanosleep() can be made to work, but only if used on something other than the main thread.
A better solution is to use any of the time based constructs in GCD as that'll make more efficient use of system resources.
If you want the simulation to appear smooth to the user, you'll really want to lock the slowed version to the refresh rate of the screen. On iOS, there is CADisplayLink. I don't know of a direct equivalent on the Mac.
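For example, a dispatch timer source can drive the simulation at the wall-clock rate without any spinning. A minimal sketch (stepSimulation() is a hypothetical stand-in for the frame computation; dt_ns is the question's timestep in nanoseconds):

dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER,
                                                 0, 0,
                                                 dispatch_get_main_queue());
dispatch_source_set_timer(timer,
                          dispatch_time(DISPATCH_TIME_NOW, 0),
                          dt_ns,        // fire once per simulation timestep
                          dt_ns / 10);  // leeway lets the system coalesce wakeups
dispatch_source_set_event_handler(timer, ^{
    stepSimulation();                   // compute the next "frame" here
});
dispatch_resume(timer);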
You are busy-spinning. If there is a lot of time before you need to simulate again, consider sleeping instead.
But no sleep call guarantees that it will sleep for exactly the duration specified. Depending on how accurate you need to be, you can sleep for a little less and then spin for the rest, as sketched below.
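A rough sketch of that hybrid (now_ns() is a hypothetical helper returning the current time in nanoseconds, e.g. mach_absolute_time() scaled by mach_timebase_info(); next_ns is the next deadline in the same units):

uint64_t slack_ns = 500000;                    // busy-wait only the last 0.5 ms
uint64_t now = now_ns();
if (next_ns > now + slack_ns) {
    // Coarse sleep for most of the remaining interval.
    uint64_t sleep_ns = next_ns - now - slack_ns;
    struct timespec ts = { (time_t)(sleep_ns / 1000000000ULL),
                           (long)(sleep_ns % 1000000000ULL) };
    nanosleep(&ts, NULL);                      // may wake slightly early or late
}
// Fine-grained busy wait for the final stretch.
while (now_ns() < next_ns) { /* spin */ }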
This might seem weird, but I'm interested in creating an electric heater out of my computer; that is, programming an application that heats up my PC, and I need some help.
I currently have an application that runs infinite loops on the GPU (using a little shader) and on the CPU cores, but I'm also interested in getting the RAM going, as well as the various output ports. For heating the RAM, do I just allocate a lot of it and start randomly reading and writing with all 8 cores?
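Something like this sketch, for instance (plain C; the buffer size, stride, and thread count are arbitrary choices, not tuned values)? Each thread walks a buffer far larger than the CPU caches so the traffic actually hits the DRAM:

#include <pthread.h>
#include <stdlib.h>

#define BUF_BYTES (1ULL << 30)     /* 1 GiB: far larger than the CPU caches */
#define NTHREADS  8                /* one per core, per the question */

static void *ram_burner(void *arg) {
    volatile char *buf = malloc(BUF_BYTES);
    if (!buf) return NULL;
    /* A large odd stride eventually touches every byte while defeating
       caching and prefetching. */
    for (unsigned long long i = 0; ; i = (i + 4097) % BUF_BYTES)
        buf[i] ^= 1;
    return NULL;                   /* never reached */
}

int main(void) {
    pthread_t threads[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, ram_burner, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);   /* runs until killed */
    return 0;
}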
And what about exercising the CD-ROM, floppy, etc.: how do I do that?
How about a heater with a purpose? Just run World Community Grid: create tons of heat while making your computer do valuable computations for science. It runs the processors wide open, is stable, and isn't just wasting cycles.
Have a look at How to stress test a computer. If you're interested in making your own, try searching for open-source stress-test software that you could modify to your liking.
Use Furmark together with LinX/Prime95. Max out your settings. Make sure you have a strong enough PSU.
There's a torture-test option for CPU & RAM in Prime95 that looks like what you want. As for the GPU, there is Furmark, which achieves the same kind of stress.
The heat from the other components will likely not be relevant (unless you have something really specific, like a PhysX card) if you stress your CPU and GPU enough, imho.